Introduction.
In Study 2 of the report “Differentiate to Regulate: Low Negative Emotion Differentiation Is Associated With Ineffective Use but Not Selection of Emotion-Regulation Strategies”, Kalokerinos and colleagues (2019) examined how the ability to make fine-grained distinctions between emotional states (i.e., emotion differentiation) related to the selection and efficacy of emotion-regulation (ER) strategies in the context of a high-intensity emotional event.
The authors enrolled students in a nine-day experience sampling method (ESM) regimen beginning two days before the students received a highly consequential test result. The student participants completed ten surveys per day across the nine days, delivered via an application on their own smartphones or via a research-only mobile phone provided by the experimenters. Since the present study concerns regulation after an emotional event, only data collected after the results were released were analyzed, resulting in 7 days of ESM data.
The data collected in each survey were:
ratings of six discrete negative emotions (sad, angry, disappointed, ashamed, anxious, stressed) on a scale of 1-100 in response to thinking about the participant’s grade on the exam
ratings of the use of six ER strategies (rumination, distraction, reappraisal, expressive suppression, social sharing, acceptance) on a 7-point scale
the student’s score on the exam (dichotomized in the analysis into passing and failing groups and then aggregated to create a ‘percentage of exams passed’ variable).
I chose to reproduce this result for two reasons. The first and primary reason is that I am collecting EMA data with which I plan to fit a similar type of lagged panel model, so reproducing this analysis will provide helpful practice with this kind of modeling. The second reason is that I am interested in the relationships between emotion differentiation abilities and emotion-regulation processes, so the results of this paper are of personal importance to me.
The Github repository containing all the materials for this reproduction can be found here. The original research report PDF can be found here.
Planned sample.
The sample for this dataset consisted of first-year Belgian undergraduate students. The planned N was 100 participants, which would provide 80% power to detect a medium effect (r = .30, α = .05). The final N for the study was 101 participants (14 males; age: M = 18.64, SD = 1.45). Participants were recruited through a university research participation program and through social media. The authors had a predetermined rule to omit participants with less than 50% completion, but no participants met this criterion, so all were included in the analysis. Participants were compensated in proportion to their study completion percentage.
Materials.
The materials for the study were taken directly from the original paper and are copied below:
Negative emotion.
Six emotions (sad, angry, disappointed, ashamed, anxious, stressed) were assessed on a 100-point scale (1 = not at all, 100 = very much). The item stem was “When you think about your grades right now, how [emotion] are you feeling?” (RKF = .99, RC = .74). In this study, we updated this measure to include emotions relevant to the context of receiving learning outcomes (Pekrun, 2006). We kept “sad,” “angry,” “anxious,” and “stressed” from Study 1, as the former three are also learning-related emotions (Pekrun, 2006), and continuity across studies allowed for comparison. However, differentiation should replicate across the inclusion of different emotions if each of the emotions provides new information. We added “disappointed” and “ashamed” because of their centrality in retrospectively evaluating learning outcomes (Pekrun, 2006).
Negative-emotion differentiation.
As in Study 1, we took the ICC between negative emotions within-person across measurement occasions, applied a Fisher’s z transformation, and then reverse scored it so higher numbers equaled higher differentiation. There were no negative ICCs.
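As a sanity check on this computation, here is a minimal sketch for a single participant’s occasion-by-emotion ratings (hypothetical data; the variable names are illustrative, not from the shared dataset):

```r
library(irr)    # icc()
library(psych)  # fisherz()

# Hypothetical ratings: 10 measurement occasions x 6 negative emotions (1-100)
set.seed(1)
ratings <- as.data.frame(matrix(sample(1:100, 60, replace = TRUE), nrow = 10))

# Two-way average-measures ICC across the six emotions, Fisher-z transformed
# and reverse scored so that higher values indicate greater differentiation
icc_val <- irr::icc(ratings, model = "twoway", unit = "average")$value
ed_score <- -psych::fisherz(icc_val)
```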
Emotion-regulation strategies.
We assessed six strategies on a 7-point scale (0 = not at all, 6 = very much). The item stem was “Since the last beep, have you . . .” Five strategies from Study 1 were reworded to assess grade-relevant regulation: rumination (“ruminated about your grades?”), distraction (“distracted yourself from your grades and the associated emotions?”), reappraisal (“looked at your grades or the emotions that go with them from another perspective?”), expressive suppression (“suppressed the outward expression of your emotions about your grades?”), and social sharing (“talked to others about your grades and the associated emotions?”). We also included acceptance (“accepted your emotions about your grades the way they are?”).
Percentage passed.
For each subject, participants reported scores out of 20, with 10 and above being a passing grade and below 10 a failing grade. Failing requires retaking the exam later in the year or, in the case of too many failures, termination of enrollment. Given the clear emotional line at passing, we dichotomized scores on each subject as fail (1–9) or pass (10–20) and calculated the percentage of subjects passed across all subjects taken. This percentage variable was highly correlated with mean score out of 20 across exams (r = .90), and we found no differences in reported results when using mean score instead of percentage passed. In the baseline survey, we assessed participants’ expectations about their upcoming exam grades using the same measure. We used this to compute an expected-percentage-passed variable. Including both expected and actual passing percentage, or the difference between actual and expected passing percentage, did not substantively change our results. Thus, we focus on actual passing percentage.
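A minimal sketch of this dichotomization and aggregation for one hypothetical participant (the shared dataset already contains the resulting perc_pass variable):

```r
# Hypothetical scores out of 20 on five subjects; 10 and above is a pass
subject_scores <- c(8, 12, 15, 9, 14)
perc_pass <- mean(subject_scores >= 10) * 100
perc_pass  # 60: three of five subjects passed
```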
Procedure.
The procedure for data collection is copied from the original article below:
Three days before receiving results, participants came to a lab session where they were trained on the ESM protocol. Participants were told that the study was about emotions and exams but were not given details about specific hypotheses. They then completed the ESM phase. On results-release day, within a 2-hr period, students were notified by e-mail that results were available in an online portal and asked to check them immediately. On this day, participants were sent a link to an online survey asking them to report their grade for each subject. For the ESM protocol, participants with a compatible personal Android phone installed mobileQ (N = 28). Other participants were given a research-only smartphone (N = 73). Participants completed 9 consecutive days of experience sampling: 2 days before the results release and 7 days after. We used a stratified random-interval scheme that sent a random signal within 10 equal intervals between 10:00 a.m. and 10:00 p.m. There was some variability in when results were released: Participants received their results between surveys 21 and 28 of 90. We were interested in regulation in response to results, and thus we included only post-results surveys, meaning that participants received between 63 and 70 surveys (M = 68.69). Participants received a signal on average every 71.9 min (SD = 29.8) and completed an average of 90.5% of signals (SD = 7.8%).
Analysis plan.
The data analysis conducted in the original study consisted of two model structures, each fit separately for each of the six emotion-regulation strategies, resulting in twelve models in total. The description of the data-analytic strategy from the original article is copied below:
Data-analytic strategy
As in Study 1, we used lme4 (Bates et al., 2015) to fit mixed-effects models and standardized variables for analyses. We ran two-level models, with measurement occasions (N = 6,282) nested within persons (N = 101). In these models, we included percentage pass as a proxy for the emotional intensity of the stimulus. However, because we did not have the necessary statistical power, we did not estimate a three-way interaction with this variable. Strategies and negative emotion were measured at the occasion level, and differentiation and percentage passed at the person level. We found no substantive differences in either model when person-level negative emotion was included, but we included this variable in Model 1 to replicate Study 1.
Model 1: emotion differentiation as a predictor of emotion-regulation strategies.
In Model 1, we used differentiation, percentage passed, and negative emotion, which were grand-mean centered, to predict each strategy separately (six models). We included random intercepts per participant.
Model 2: Emotion Differentiation × Emotion Regulation Strategies predicting negative emotion.
In Model 2, we used differentiation, regulation, their cross-level interaction, and percentage passed to predict negative emotion (separately for each strategy; six models). We included lagged negative emotion (at the previous time point) to model emotional change, again excluding overnight lags. We person-mean-centered regulation and lagged emotion and grand-mean-centered differentiation and percentage passed. We included random intercepts per participant. For each participant, we included random slopes for regulation and lagged emotion, and we allowed these slopes to covary. There was one exception to this strategy: The acceptance model would not converge until we removed the random slope for acceptance, so we report this model with this random slope omitted.
Differences from original study.
Since this is an analysis reproduction, the data will be exactly the same. I plan to run the analysis using lme4, just as the authors did. I will also conduct the same analysis using the brms package for Bayesian regression in R, so I can compare the credible intervals and posterior expected values with the confidence intervals and point estimates given by the lme4 linear mixed-model framework.
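As a sketch of the planned Bayesian comparison (using brms defaults for priors and sampler settings; variable names follow the lme4 models in this script), Model 1 for a single strategy might look like:

```r
library(brms)

# Bayesian analogue of Model 1 for one strategy (rumination), brms defaults
mod1_rumination_brm <- brm(
  ER_rumination_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
    (1 | Participant),
  data = df,
  cores = 4, seed = 1
)

# Posterior means and 95% credible intervals for the fixed effects,
# comparable to the lme4 point estimates and Wald confidence intervals
fixef(mod1_rumination_brm)
```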
Data preparation following the analysis plan.
# rm(list = ls())
library(tidyverse)
library(haven)
library(psych)
library(here)
library(broom.mixed)
library(glue)
library(lmerTest)
library(riclpmr)
library(lavaan)
theme_set(theme_minimal())
df_raw <- read_sav(here("data/Study 2_exam.sav"))
neg_emo_cols <- c("emotion_sad", "emotion_angry", "emotion_disapp",
"emotion_ashamed", "emotion_anxious", "emotion_stressed")
pos_emo_cols <- c("emotion_proud", "emotion_happy", "emotion_content", "emotion_relief")
er_strategy_cols <- c("ER_acceptance", "ER_rumination", "ER_reapp",
"ER_supp", "ER_soc_sharing", "ER_distraction")
Exclude all emotion reports prior to the release of exam results
df_raw_post_exam <- df_raw %>%
# ---- Keep only reports after exam results were received ---- #
filter(exam_beepnum >= 0)
Here I create useful columns. Most of these are grand-mean scaled (varname_sc), grand-mean centered (varname_c), person-mean centered (varname_pc), person-mean scaled (varname_psc), or lagged (varname_lag).
df_clean <- df_raw_post_exam %>%
# ---- Create pos/neg emotion intensity composite scores ---- #
mutate(
neg_emo_comp = rowMeans(df_raw_post_exam[,neg_emo_cols], na.rm = T),
pos_emo_comp = rowMeans(df_raw_post_exam[,pos_emo_cols], na.rm = T),
perc_pass = perc_pass*100
) %>%
# ---- Create person level mean negative emotion score (covariate in models) ---- #
group_by(Participant) %>%
mutate(person_neg_emo_mean = mean(neg_emo_comp, na.rm = T)) %>%
ungroup %>%
# ---- Create lagged emotion and strategy vars ---- #
group_by(Participant, beepday) %>%
mutate_at(vars(matches("emo|^ER_")),
list(lag = ~ lag(.))) %>%
ungroup %>%
# ---- Person scale emotion and strategy vars ---- #
group_by(Participant) %>%
mutate_at(vars(matches("emo|^ER_"), -matches("_sc$|_c$|_psc$|_pc$")),
list(pc = ~scale(., T, F))) %>%
mutate_at(vars(matches("emo|^ER_"), -matches("_sc$|_c$|_psc$|_pc$")),
list(psc = ~scale(.))) %>%
ungroup %>%
# ---- Create grand mean centered emotion and strategy vars ---- #
mutate_at(vars(matches("emo|^ER_|^perc_pass$"), -matches("_sc$|_c$")),
list(c = ~scale(., center=T, scale=F))) %>%
# ---- Create grand mean scaled emotion and strategy vars ---- #
mutate_at(vars(matches("emo|^ER_|^perc_pass$"), -matches("_sc$|_c$")),
list(sc = ~scale(.)))
compute_icc <- function(dat) {
dat %>%
select(all_of(neg_emo_cols)) %>%
irr::icc(model="twoway", unit = "average") %>%
unlist %>%
t %>%
as_tibble %>%
select(
stimuli = subjects,
raters,
icc = value,
lbound,
ubound
) %>%
mutate_at(vars(stimuli, raters), as.integer) %>%
mutate_at(vars(icc:ubound), as.numeric)
}
icc_df <- df_clean %>%
group_by(Participant) %>%
nest() %>%
mutate(icc = map(data, compute_icc)) %>%
unnest(icc) %>%
select(-data) %>%
ungroup %>%
mutate(
icc_fz = fisherz(icc),
ed_score = icc_fz*-1,
ed_score_class = case_when(
ed_score > (mean(ed_score, na.rm = T) + sd(ed_score, na.rm = T)) ~ "> +1 SD",
ed_score < (mean(ed_score, na.rm = T) - sd(ed_score, na.rm = T)) ~ "< -1 SD"
)
)
df <- icc_df %>%
select(Participant, icc, ed_score, ed_score_class) %>%
right_join(df_clean, by="Participant") %>%
mutate(ed_score_sc = scale(ed_score))
Because the smartphone app requires the user to answer every item before continuing, a missing regulation-strategy rating indicates an incomplete response.
df <- df %>%
filter(!is.na(ER_reapp))
df %>%
group_by(Participant) %>%
summarize(proportion_complete = sum(!is.na(ExecutionTime))/90) %>%
ungroup %>%
ggplot(aes(x = proportion_complete)) +
geom_histogram(binwidth = .02) +
# geom_density() +
labs(x = "Proportion complete", y = "Number of participants")
df_emotion_long <- df %>%
group_by(Participant, perc_pass) %>%
summarize_at(vars(starts_with("emotion_"),
-matches("_c$|_sc$|_psc$|_lag|_pc$")),
mean, na.rm=T) %>%
pivot_longer(cols = c(-Participant, -perc_pass),
names_to = "emotion_type", values_to = "emotion_rating") %>%
mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
emotion_type = reorder(emotion_type, emotion_rating, mean)) %>%
left_join(select(icc_df, Participant, ed_score)) %>%
mutate(ed_score_class = case_when(
    ed_score > (mean(.$ed_score, na.rm = T) + sd(.$ed_score, na.rm = T)) ~ "> +1 SD",
    ed_score < (mean(.$ed_score, na.rm = T) - sd(.$ed_score, na.rm = T)) ~ "< -1 SD",
    TRUE ~ "within 1 SD"
  ))
## Joining, by = "Participant"
df %>%
group_by(Participant) %>%
summarize_at(vars(starts_with("emotion_"),
-matches("_c$|_sc$|_psc$|_lag|_pc")),
mean, na.rm=T) %>%
pivot_longer(cols = -Participant, names_to = "emotion_type", values_to = "emotion_rating") %>%
mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
emotion_type = reorder(emotion_type, emotion_rating, mean)) %>%
ggplot(aes(x = emotion_type, y = emotion_rating)) +
geom_bar(stat="summary") +
theme(axis.text.x = element_text(angle=90)) +
labs(x = "Emotion type", y = "Mean emotion intensity")
## No summary function supplied, defaulting to `mean_se()`
Each participant’s mean is plotted with a black point and 95% CI overlaying the raw data in color. The mean of participant means is represented with a dotted line.
mean_neg_emo <- df %>%
group_by(Participant) %>%
summarize(mean = mean(neg_emo_comp, na.rm = T)) %>%
summarize(mean = mean(mean)) %>%
unlist
df %>%
ggplot(aes(x = factor(Participant), y = neg_emo_comp, color = factor(Participant))) +
geom_point(position = position_jitter(0), alpha = .4, size = .6) +
stat_summary(fun.data = "mean_cl_boot", size = .2, color = "black") +
geom_hline(yintercept = mean_neg_emo, linetype = "dotted") +
theme(legend.position = "none", axis.text.x=element_blank()) +
labs(x = "Participant", y = "Negative emotion")
df %>%
filter(exam_beepnum >= 0) %>%
group_by(Participant) %>%
ggplot(aes(x = exam_beepnum, y = neg_emo_comp)) +
geom_jitter(alpha = .2, size = .4) +
geom_smooth() +
geom_smooth(method="lm", linetype="dotted") +
labs(x = "EMA beep number", y = "Negative emotion mean")
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
perc_pass_summary <- df %>%
group_by(Participant) %>%
summarize(perc_pass = mean(perc_pass, na.rm = T)) %>%
summarize(mean = mean(perc_pass, na.rm = T),
sd = sd(perc_pass, na.rm = T))
df %>%
group_by(Participant, exam_beepnum) %>%
summarize_at(vars(starts_with("emotion_"),
-matches("_sc$|_c$|_psc$|_pc$|_lag$")),
mean, na.rm=T) %>%
pivot_longer(cols = c(-Participant, -exam_beepnum),
names_to = "emotion_type",
values_to = "emotion_rating") %>%
mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
emotion_type = reorder(emotion_type, emotion_rating, mean)) %>%
ggplot(aes(x = exam_beepnum, y = emotion_rating, color = emotion_type)) +
geom_jitter(alpha = .2, size = .1) +
geom_smooth(se=F) +
labs(x = "Ping number relative to exam result", y = "Emotion intensity", color = "")
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
df %>%
group_by(Participant, exam_beepnum) %>%
summarize_at(vars(starts_with("emotion_"), -matches("_sc$|_c$|_psc$|_pc$|_lag")), mean, na.rm=T) %>%
pivot_longer(cols = c(-Participant, -exam_beepnum), names_to = "emotion_type", values_to = "emotion_rating") %>%
mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
emotion_type = reorder(emotion_type, emotion_rating, mean)) %>%
ggplot(aes(x = exam_beepnum, y = emotion_rating, color = emotion_type)) +
facet_grid(emotion_type ~.) +
geom_jitter(alpha = .1, size = .2) +
geom_smooth() +
theme(axis.text.x = element_text(angle=90)) +
labs(x = "Ping number relative to exam result", y = "Emotion intensity")
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
Note that the variance in negative emotion increases starkly as exam performance worsens. This heteroscedasticity could pose a problem for linear modeling.
df %>%
group_by(Participant, perc_pass) %>%
summarize_at(vars(starts_with("emotion_"), -matches("_sc$|_c$|_psc$|_pc$|_lag")), mean, na.rm=T) %>%
pivot_longer(cols = c(-Participant, -perc_pass),
names_to = "emotion_type", values_to = "emotion_rating") %>%
filter(emotion_type %in% neg_emo_cols) %>%
mutate(emotion_type = str_replace_all(emotion_type, "emotion_", ""),
emotion_type = reorder(emotion_type, emotion_rating, mean)) %>%
summarize(mean_neg_emo = mean(emotion_rating),
perc_pass = perc_pass[1]) %>%
ggplot(aes(x = perc_pass, y = mean_neg_emo)) +
geom_jitter(alpha = .2) +
geom_smooth() +
labs(x = "Percentage of exams passed", y = "Mean negative emotion")
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
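To quantify the widening spread noted above, here is a descriptive check added for this reproduction (not part of the original analysis; the pass-rate bands are an arbitrary choice for illustration):

```r
# Spread of person-level negative emotion within pass-rate bands
df %>%
  group_by(Participant) %>%
  summarize(perc_pass = first(perc_pass),
            neg_emo = mean(neg_emo_comp, na.rm = TRUE)) %>%
  mutate(pass_band = cut(perc_pass, breaks = c(-Inf, 50, 80, Inf),
                         labels = c("<=50%", "50-80%", ">80%"))) %>%
  group_by(pass_band) %>%
  summarize(n = n(), sd_neg_emo = sd(neg_emo, na.rm = TRUE))
```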
Acceptance accounts for an outsized share of strategy use. Acceptance is difficult to measure in this context because the item can easily be misinterpreted as accepting one’s exam scores rather than accepting one’s negative emotions. Colloquially, people are more accustomed to talking about how much they accept a situation than how much they accept the emotions that accompany it. This confusion is less of a concern for the other strategies, where the colloquial interpretation aligns more closely with the emotion-regulation construct.
df %>%
group_by(Participant) %>%
summarize_at(vars(starts_with("ER_"),
-matches("_sc$|_c$|_psc$|_pc$|_lag$")),
mean, na.rm=T) %>%
pivot_longer(cols = -Participant,
names_to = "strategy_type", values_to = "strategy_rating") %>%
mutate(strategy_type = str_replace_all(strategy_type, "ER_", ""),
strategy_type = reorder(strategy_type, strategy_rating, mean)) %>%
mutate(strategy_type=recode(strategy_type,
"rumination" = "Rumination",
"reapp" = "Reappraisal",
"soc_sharing" = "Social sharing",
"acceptance" = "Acceptance",
"distraction" = "Distraction",
"supp" = "Suppression"
)) %>%
ggplot(aes(x = strategy_type, y = strategy_rating)) +
stat_summary(fun.data = "mean_cl_boot", size = .4) +
geom_jitter(alpha = .2, size = .5) +
theme(axis.text.x = element_text(angle=90)) +
scale_y_continuous(limits = c(0,6)) +
labs(x = "",
y = "Mean strategy use")
## Warning: Removed 20 rows containing missing values (geom_point).
df %>%
pivot_longer(cols = er_strategy_cols,
names_to = "strategy_type",
values_to = "strategy_rating") %>%
group_by(Participant, strategy_type, ed_score) %>%
summarize(strategy_usage_mean = mean(strategy_rating, na.rm = T)) %>%
ggplot(aes(x = ed_score, y = strategy_usage_mean, color = strategy_type)) +
# facet_grid(strategy_type~.) +
geom_jitter(alpha = .2) +
geom_smooth(method = "lm") +
geom_smooth(se=F, linetype = "dashed", size = .3) +
labs(x = "Emotion differentiation score",
y = "Mean strategy use",
color = "")
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
df %>%
pivot_longer(cols = er_strategy_cols,
names_to = "strategy_type",
values_to = "strategy_rating") %>%
mutate(strategy_type=recode(strategy_type,
"ER_rumination" = "Rumination",
"ER_reapp" = "Reappraisal",
"ER_soc_sharing" = "Social sharing",
"ER_acceptance" = "Acceptance",
"ER_distraction" = "Distraction",
"ER_supp" = "Suppression"
)) %>%
ggplot(aes(x = neg_emo_comp_lag, y = strategy_rating, color = strategy_type)) +
# facet_grid(strategy_type~.) +
geom_jitter(alpha = .1, size = .5) +
geom_smooth(method = "lm") +
geom_smooth(se=F, linetype = "dashed", size = .3) +
labs(x = "Emotion intensity at time t-1", y = "Strategy use at time t", color = "")
## Warning: Removed 6402 rows containing non-finite values (stat_smooth).
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
## Warning: Removed 6402 rows containing non-finite values (stat_smooth).
## Warning: Removed 6402 rows containing missing values (geom_point).
The analyses as specified in the analysis plan.
Function for formatting output table
table_out <- function(result_df){
result_df %>%
filter(effect == "fixed") %>%
mutate(ci = glue("[{round(estimate-(std.error*1.96),2)}, {round(estimate+(std.error*1.96),2)}]")) %>%
select(-df, -effect, -group, -statistic) %>%
mutate_if(is.numeric, round, digits = 3) %>%
mutate(term = str_replace(term, "ed_score", "Differentiation")) %>%
mutate(term = str_replace(term, "_pc", "")) %>%
mutate(term = str_replace(term, "perc_pass", "Percentage passed")) %>%
mutate(term = str_replace(term, "person_neg_.*", "Negative emotion")) %>%
mutate(term = str_replace(term, ".*emo.*lag.*", "Lagged emotion")) %>%
mutate(term = str_replace(term, "ER_", "")) %>%
mutate(term = str_replace(term, ":", " x ")) %>%
mutate(term = str_replace(term, "_sc", "")) %>%
mutate(ordering = case_when(
strategy == "rumination" ~ "1",
strategy == "distraction" ~ "2",
strategy == "reapp" ~ "3",
strategy == "acceptance" ~ "4",
strategy == "supp" ~ "5",
strategy == "soc_sharing" ~ "6"
)) %>%
arrange(ordering) %>%
select(-ordering)
}
Function to format the output of the model-building functions. Creates a data frame of model estimates and a label for which strategy is being tested.
format_model <- function(x){
x[[2]] %>%
broom.mixed::tidy() %>%
mutate(strategy = x[[1]],
strategy = str_replace(strategy, "ER_|_pc_sc", "")) %>%
select(strategy, everything())
}
build_model1 <- function(x){
model_str_eval <- glue("
lmer({x} ~ ed_score_sc +
perc_pass_sc +
person_neg_emo_mean_sc +
(1 | Participant), df)")
strat <- str_replace_all(x, "_sc|ER_", "")
list(strat, eval(parse(text = model_str_eval)))
}
result_1 <- map(paste0(er_strategy_cols, "_sc"), build_model1)
result_1_df <- map(result_1, format_model) %>%
bind_rows
# assign models to environment variables, becomes useful when creating the figure reproduction
map(result_1, ~assign(x = paste0(.[[1]], "_mod1"), # var name: "strategy_mod1"
value = .[[2]],
pos = 1)) # global environment
## [[1]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## ER_acceptance_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
## (1 | Participant)
## Data: df
## REML criterion at convergence: 12271.93
## Random effects:
## Groups Name Std.Dev.
## Participant (Intercept) 0.6937
## Residual 0.6200
## Number of obs: 6282, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc perc_pass_sc
## 0.003036 0.098941 -0.112902
## person_neg_emo_mean_sc
## -0.428343
##
## [[2]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## ER_rumination_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
## (1 | Participant)
## Data: df
## REML criterion at convergence: 14794.77
## Random effects:
## Groups Name Std.Dev.
## Participant (Intercept) 0.5422
## Residual 0.7632
## Number of obs: 6282, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc perc_pass_sc
## 0.002225 -0.133323 0.021703
## person_neg_emo_mean_sc
## 0.349936
##
## [[3]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## ER_reapp_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
## (1 | Participant)
## Data: df
## REML criterion at convergence: 14167.88
## Random effects:
## Groups Name Std.Dev.
## Participant (Intercept) 0.6576
## Residual 0.7234
## Number of obs: 6282, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc perc_pass_sc
## -0.004032 -0.071000 -0.047520
## person_neg_emo_mean_sc
## 0.165692
##
## [[4]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## ER_supp_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
## (1 | Participant)
## Data: df
## REML criterion at convergence: 12789.4
## Random effects:
## Groups Name Std.Dev.
## Participant (Intercept) 0.6817
## Residual 0.6467
## Number of obs: 6282, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc perc_pass_sc
## 0.002128 -0.162534 0.047334
## person_neg_emo_mean_sc
## 0.342740
##
## [[5]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## ER_soc_sharing_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
## (1 | Participant)
## Data: df
## REML criterion at convergence: 16613.64
## Random effects:
## Groups Name Std.Dev.
## Participant (Intercept) 0.4346
## Residual 0.8870
## Number of obs: 6282, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc perc_pass_sc
## -0.002369 -0.093564 0.119003
## person_neg_emo_mean_sc
## 0.181460
##
## [[6]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## ER_distraction_sc ~ ed_score_sc + perc_pass_sc + person_neg_emo_mean_sc +
## (1 | Participant)
## Data: df
## REML criterion at convergence: 11693.27
## Random effects:
## Groups Name Std.Dev.
## Participant (Intercept) 0.8038
## Residual 0.5903
## Number of obs: 6282, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc perc_pass_sc
## -0.006892 0.042280 -0.069040
## person_neg_emo_mean_sc
## 0.071282
Model output
result_1_df %>% sjPlot::tab_df(digits = 3)
| strategy | effect | group | term | estimate | std.error | statistic | df | p.value |
|---|---|---|---|---|---|---|---|---|
| acceptance | fixed | NA | (Intercept) | 0.003 | 0.069 | 0.044 | 97.058 | 0.965 |
| acceptance | fixed | NA | ed_score_sc | 0.099 | 0.071 | 1.403 | 97.057 | 0.164 |
| acceptance | fixed | NA | perc_pass_sc | -0.113 | 0.089 | -1.270 | 97.061 | 0.207 |
| acceptance | fixed | NA | person_neg_emo_mean_sc | -0.428 | 0.088 | -4.857 | 97.097 | 0.000 |
| acceptance | ran_pars | Participant | sd__(Intercept) | 0.694 | NA | NA | NA | NA |
| acceptance | ran_pars | Residual | sd__Observation | 0.620 | NA | NA | NA | NA |
| rumination | fixed | NA | (Intercept) | 0.002 | 0.055 | 0.041 | 97.016 | 0.968 |
| rumination | fixed | NA | ed_score_sc | -0.133 | 0.056 | -2.396 | 97.013 | 0.018 |
| rumination | fixed | NA | perc_pass_sc | 0.022 | 0.070 | 0.309 | 97.023 | 0.758 |
| rumination | fixed | NA | person_neg_emo_mean_sc | 0.350 | 0.070 | 5.029 | 97.110 | 0.000 |
| rumination | ran_pars | Participant | sd__(Intercept) | 0.542 | NA | NA | NA | NA |
| rumination | ran_pars | Residual | sd__Observation | 0.763 | NA | NA | NA | NA |
| reapp | fixed | NA | (Intercept) | -0.004 | 0.066 | -0.061 | 97.140 | 0.951 |
| reapp | fixed | NA | ed_score_sc | -0.071 | 0.067 | -1.059 | 97.139 | 0.292 |
| reapp | fixed | NA | perc_pass_sc | -0.048 | 0.085 | -0.562 | 97.145 | 0.575 |
| reapp | fixed | NA | person_neg_emo_mean_sc | 0.166 | 0.084 | 1.975 | 97.199 | 0.051 |
| reapp | ran_pars | Participant | sd__(Intercept) | 0.658 | NA | NA | NA | NA |
| reapp | ran_pars | Residual | sd__Observation | 0.723 | NA | NA | NA | NA |
| supp | fixed | NA | (Intercept) | 0.002 | 0.068 | 0.031 | 97.044 | 0.975 |
| supp | fixed | NA | ed_score_sc | -0.163 | 0.069 | -2.344 | 97.043 | 0.021 |
| supp | fixed | NA | perc_pass_sc | 0.047 | 0.087 | 0.541 | 97.048 | 0.590 |
| supp | fixed | NA | person_neg_emo_mean_sc | 0.343 | 0.087 | 3.952 | 97.088 | 0.000 |
| supp | ran_pars | Participant | sd__(Intercept) | 0.682 | NA | NA | NA | NA |
| supp | ran_pars | Residual | sd__Observation | 0.647 | NA | NA | NA | NA |
| soc_sharing | fixed | NA | (Intercept) | -0.002 | 0.045 | -0.053 | 97.280 | 0.958 |
| soc_sharing | fixed | NA | ed_score_sc | -0.094 | 0.045 | -2.063 | 97.273 | 0.042 |
| soc_sharing | fixed | NA | perc_pass_sc | 0.119 | 0.057 | 2.081 | 97.293 | 0.040 |
| soc_sharing | fixed | NA | person_neg_emo_mean_sc | 0.181 | 0.057 | 3.198 | 97.466 | 0.002 |
| soc_sharing | ran_pars | Participant | sd__(Intercept) | 0.435 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Residual | sd__Observation | 0.887 | NA | NA | NA | NA |
| distraction | fixed | NA | (Intercept) | -0.007 | 0.080 | -0.086 | 97.056 | 0.932 |
| distraction | fixed | NA | ed_score_sc | 0.042 | 0.082 | 0.519 | 97.055 | 0.605 |
| distraction | fixed | NA | perc_pass_sc | -0.069 | 0.103 | -0.672 | 97.058 | 0.503 |
| distraction | fixed | NA | person_neg_emo_mean_sc | 0.071 | 0.102 | 0.699 | 97.082 | 0.486 |
| distraction | ran_pars | Participant | sd__(Intercept) | 0.804 | NA | NA | NA | NA |
| distraction | ran_pars | Residual | sd__Observation | 0.590 | NA | NA | NA | NA |
Write results to disk
result_1_out <- result_1_df %>%
table_out
write_csv(result_1_out, here("writeup/model1_table.csv"))
build_model2 <- function(x){
model_str_eval <- glue("
lmer(neg_emo_comp_sc ~ ed_score_sc*{x} +
perc_pass_sc + neg_emo_comp_lag_pc_sc +
({x} + neg_emo_comp_lag_pc_sc | Participant), df)")
strat <- str_replace_all(x, "_pc|_sc|ER_", "")
list(strat, eval(parse(text = model_str_eval)))
}
result_2 <- map(paste0(er_strategy_cols, "_pc_sc"), build_model2)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00907095
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00264685
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00335315
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.00429511
## (tol = 0.002, component 1)
## Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl =
## control$checkConv, : Model failed to converge with max|grad| = 0.0029047
## (tol = 0.002, component 1)
result_2_df <- map(result_2, format_model) %>%
bind_rows
# assign models to variables
map(result_2, ~assign(x = paste0(.[[1]], "_mod2"), # var name: strategyvar_mod2
value = .[[2]],
pos = 1)) # global environment
## [[1]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## neg_emo_comp_sc ~ ed_score_sc * ER_acceptance_pc_sc + perc_pass_sc +
## neg_emo_comp_lag_pc_sc + (ER_acceptance_pc_sc + neg_emo_comp_lag_pc_sc |
## Participant)
## Data: df
## REML criterion at convergence: 3405.194
## Random effects:
## Groups Name Std.Dev. Corr
## Participant (Intercept) 0.74707
## ER_acceptance_pc_sc 0.08932 -0.08
## neg_emo_comp_lag_pc_sc 0.07642 -0.27 0.45
## Residual 0.30974
## Number of obs: 5215, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc
## -0.004005 -0.044100
## ER_acceptance_pc_sc perc_pass_sc
## -0.037240 -0.578213
## neg_emo_comp_lag_pc_sc ed_score_sc:ER_acceptance_pc_sc
## 0.159014 0.031372
## convergence code 0; 1 optimizer warnings; 0 lme4 warnings
##
## [[2]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## neg_emo_comp_sc ~ ed_score_sc * ER_rumination_pc_sc + perc_pass_sc +
## neg_emo_comp_lag_pc_sc + (ER_rumination_pc_sc + neg_emo_comp_lag_pc_sc |
## Participant)
## Data: df
## REML criterion at convergence: 3249.708
## Random effects:
## Groups Name Std.Dev. Corr
## Participant (Intercept) 0.74596
## ER_rumination_pc_sc 0.08284 -0.09
## neg_emo_comp_lag_pc_sc 0.07437 -0.14 -0.19
## Residual 0.30584
## Number of obs: 5215, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc
## -3.988e-05 -2.151e-02
## ER_rumination_pc_sc perc_pass_sc
## 8.276e-02 -5.827e-01
## neg_emo_comp_lag_pc_sc ed_score_sc:ER_rumination_pc_sc
## 1.419e-01 -2.483e-02
## convergence code 0; 1 optimizer warnings; 0 lme4 warnings
##
## [[3]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula: neg_emo_comp_sc ~ ed_score_sc * ER_reapp_pc_sc + perc_pass_sc +
## neg_emo_comp_lag_pc_sc + (ER_reapp_pc_sc + neg_emo_comp_lag_pc_sc |
## Participant)
## Data: df
## REML criterion at convergence: 3581.976
## Random effects:
## Groups Name Std.Dev. Corr
## Participant (Intercept) 0.74657
## ER_reapp_pc_sc 0.07206 -0.18
## neg_emo_comp_lag_pc_sc 0.07472 -0.18 -0.15
## Residual 0.31645
## Number of obs: 5215, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc
## -0.003274 -0.029728
## ER_reapp_pc_sc perc_pass_sc
## 0.024061 -0.577715
## neg_emo_comp_lag_pc_sc ed_score_sc:ER_reapp_pc_sc
## 0.161618 -0.016716
## convergence code 0; 1 optimizer warnings; 0 lme4 warnings
##
## [[4]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula: neg_emo_comp_sc ~ ed_score_sc * ER_supp_pc_sc + perc_pass_sc +
## neg_emo_comp_lag_pc_sc + (ER_supp_pc_sc + neg_emo_comp_lag_pc_sc |
## Participant)
## Data: df
## REML criterion at convergence: 3485.752
## Random effects:
## Groups Name Std.Dev. Corr
## Participant (Intercept) 0.74687
## ER_supp_pc_sc 0.06417 -0.17
## neg_emo_comp_lag_pc_sc 0.07377 -0.21 0.15
## Residual 0.31388
## Number of obs: 5215, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc
## -0.002832 -0.033687
## ER_supp_pc_sc perc_pass_sc
## 0.042902 -0.582507
## neg_emo_comp_lag_pc_sc ed_score_sc:ER_supp_pc_sc
## 0.160303 -0.017418
## convergence code 0; 1 optimizer warnings; 0 lme4 warnings
##
## [[5]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## neg_emo_comp_sc ~ ed_score_sc * ER_soc_sharing_pc_sc + perc_pass_sc +
## neg_emo_comp_lag_pc_sc + (ER_soc_sharing_pc_sc + neg_emo_comp_lag_pc_sc |
## Participant)
## Data: df
## REML criterion at convergence: 3544.07
## Random effects:
## Groups Name Std.Dev. Corr
## Participant (Intercept) 0.74696
## ER_soc_sharing_pc_sc 0.06093 -0.09
## neg_emo_comp_lag_pc_sc 0.07187 -0.21 -0.03
## Residual 0.31531
## Number of obs: 5215, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc
## -0.002192 -0.031941
## ER_soc_sharing_pc_sc perc_pass_sc
## 0.037273 -0.587319
## neg_emo_comp_lag_pc_sc ed_score_sc:ER_soc_sharing_pc_sc
## 0.155911 -0.030212
##
## [[6]]
## Linear mixed model fit by REML ['lmerModLmerTest']
## Formula:
## neg_emo_comp_sc ~ ed_score_sc * ER_distraction_pc_sc + perc_pass_sc +
## neg_emo_comp_lag_pc_sc + (ER_distraction_pc_sc + neg_emo_comp_lag_pc_sc |
## Participant)
## Data: df
## REML criterion at convergence: 3579.994
## Random effects:
## Groups Name Std.Dev. Corr
## Participant (Intercept) 0.74678
## ER_distraction_pc_sc 0.06897 0.04
## neg_emo_comp_lag_pc_sc 0.07654 -0.20 0.09
## Residual 0.31621
## Number of obs: 5215, groups: Participant, 101
## Fixed Effects:
## (Intercept) ed_score_sc
## -0.003134 -0.035145
## ER_distraction_pc_sc perc_pass_sc
## 0.024850 -0.574860
## neg_emo_comp_lag_pc_sc ed_score_sc:ER_distraction_pc_sc
## 0.161962 -0.025588
## convergence code 0; 1 optimizer warnings; 0 lme4 warnings
Model output
result_2_df %>% sjPlot::tab_df(digits = 3)
| strategy | effect | group | term | estimate | std.error | statistic | df | p.value |
|---|---|---|---|---|---|---|---|---|
| acceptance | fixed | NA | (Intercept) | -0.004 | 0.074 | -0.054 | 97.561 | 0.957 |
| acceptance | fixed | NA | ed_score_sc | -0.044 | 0.074 | -0.597 | 98.154 | 0.552 |
| acceptance | fixed | NA | ER_acceptance_pc_sc | -0.037 | 0.011 | -3.345 | 79.598 | 0.001 |
| acceptance | fixed | NA | perc_pass_sc | -0.578 | 0.074 | -7.819 | 98.587 | 0.000 |
| acceptance | fixed | NA | neg_emo_comp_lag_pc_sc | 0.159 | 0.010 | 15.743 | 96.188 | 0.000 |
| acceptance | fixed | NA | ed_score_sc:ER_acceptance_pc_sc | 0.031 | 0.010 | 3.040 | 74.728 | 0.003 |
| acceptance | ran_pars | Participant | sd__(Intercept) | 0.747 | NA | NA | NA | NA |
| acceptance | ran_pars | Participant | sd__ER_acceptance_pc_sc | 0.089 | NA | NA | NA | NA |
| acceptance | ran_pars | Participant | sd__neg_emo_comp_lag_pc_sc | 0.076 | NA | NA | NA | NA |
| acceptance | ran_pars | Participant | cor__(Intercept).ER_acceptance_pc_sc | -0.084 | NA | NA | NA | NA |
| acceptance | ran_pars | Participant | cor__(Intercept).neg_emo_comp_lag_pc_sc | -0.273 | NA | NA | NA | NA |
| acceptance | ran_pars | Participant | cor__ER_acceptance_pc_sc.neg_emo_comp_lag_pc_sc | 0.445 | NA | NA | NA | NA |
| acceptance | ran_pars | Residual | sd__Observation | 0.310 | NA | NA | NA | NA |
| rumination | fixed | NA | (Intercept) | -0.000 | 0.074 | -0.001 | 97.894 | 1.000 |
| rumination | fixed | NA | ed_score_sc | -0.022 | 0.075 | -0.287 | 98.057 | 0.775 |
| rumination | fixed | NA | ER_rumination_pc_sc | 0.083 | 0.012 | 7.164 | 72.970 | 0.000 |
| rumination | fixed | NA | perc_pass_sc | -0.583 | 0.075 | -7.792 | 98.386 | 0.000 |
| rumination | fixed | NA | neg_emo_comp_lag_pc_sc | 0.142 | 0.010 | 14.057 | 93.577 | 0.000 |
| rumination | fixed | NA | ed_score_sc:ER_rumination_pc_sc | -0.025 | 0.012 | -2.083 | 70.843 | 0.041 |
| rumination | ran_pars | Participant | sd__(Intercept) | 0.746 | NA | NA | NA | NA |
| rumination | ran_pars | Participant | sd__ER_rumination_pc_sc | 0.083 | NA | NA | NA | NA |
| rumination | ran_pars | Participant | sd__neg_emo_comp_lag_pc_sc | 0.074 | NA | NA | NA | NA |
| rumination | ran_pars | Participant | cor__(Intercept).ER_rumination_pc_sc | -0.085 | NA | NA | NA | NA |
| rumination | ran_pars | Participant | cor__(Intercept).neg_emo_comp_lag_pc_sc | -0.142 | NA | NA | NA | NA |
| rumination | ran_pars | Participant | cor__ER_rumination_pc_sc.neg_emo_comp_lag_pc_sc | -0.194 | NA | NA | NA | NA |
| rumination | ran_pars | Residual | sd__Observation | 0.306 | NA | NA | NA | NA |
| reapp | fixed | NA | (Intercept) | -0.003 | 0.074 | -0.044 | 97.845 | 0.965 |
| reapp | fixed | NA | ed_score_sc | -0.030 | 0.075 | -0.397 | 98.183 | 0.692 |
| reapp | fixed | NA | ER_reapp_pc_sc | 0.024 | 0.011 | 2.284 | 58.083 | 0.026 |
| reapp | fixed | NA | perc_pass_sc | -0.578 | 0.074 | -7.786 | 98.682 | 0.000 |
| reapp | fixed | NA | neg_emo_comp_lag_pc_sc | 0.162 | 0.010 | 16.000 | 94.411 | 0.000 |
| reapp | fixed | NA | ed_score_sc:ER_reapp_pc_sc | -0.017 | 0.011 | -1.546 | 66.219 | 0.127 |
| reapp | ran_pars | Participant | sd__(Intercept) | 0.747 | NA | NA | NA | NA |
| reapp | ran_pars | Participant | sd__ER_reapp_pc_sc | 0.072 | NA | NA | NA | NA |
| reapp | ran_pars | Participant | sd__neg_emo_comp_lag_pc_sc | 0.075 | NA | NA | NA | NA |
| reapp | ran_pars | Participant | cor__(Intercept).ER_reapp_pc_sc | -0.183 | NA | NA | NA | NA |
| reapp | ran_pars | Participant | cor__(Intercept).neg_emo_comp_lag_pc_sc | -0.176 | NA | NA | NA | NA |
| reapp | ran_pars | Participant | cor__ER_reapp_pc_sc.neg_emo_comp_lag_pc_sc | -0.150 | NA | NA | NA | NA |
| reapp | ran_pars | Residual | sd__Observation | 0.316 | NA | NA | NA | NA |
| supp | fixed | NA | (Intercept) | -0.003 | 0.074 | -0.038 | 97.742 | 0.970 |
| supp | fixed | NA | ed_score_sc | -0.034 | 0.075 | -0.451 | 98.152 | 0.653 |
| supp | fixed | NA | ER_supp_pc_sc | 0.043 | 0.010 | 4.156 | 52.177 | 0.000 |
| supp | fixed | NA | perc_pass_sc | -0.583 | 0.074 | -7.848 | 98.368 | 0.000 |
| supp | fixed | NA | neg_emo_comp_lag_pc_sc | 0.160 | 0.010 | 16.078 | 93.618 | 0.000 |
| supp | fixed | NA | ed_score_sc:ER_supp_pc_sc | -0.017 | 0.010 | -1.719 | 48.739 | 0.092 |
| supp | ran_pars | Participant | sd__(Intercept) | 0.747 | NA | NA | NA | NA |
| supp | ran_pars | Participant | sd__ER_supp_pc_sc | 0.064 | NA | NA | NA | NA |
| supp | ran_pars | Participant | sd__neg_emo_comp_lag_pc_sc | 0.074 | NA | NA | NA | NA |
| supp | ran_pars | Participant | cor__(Intercept).ER_supp_pc_sc | -0.167 | NA | NA | NA | NA |
| supp | ran_pars | Participant | cor__(Intercept).neg_emo_comp_lag_pc_sc | -0.210 | NA | NA | NA | NA |
| supp | ran_pars | Participant | cor__ER_supp_pc_sc.neg_emo_comp_lag_pc_sc | 0.153 | NA | NA | NA | NA |
| supp | ran_pars | Residual | sd__Observation | 0.314 | NA | NA | NA | NA |
| soc_sharing | fixed | NA | (Intercept) | -0.002 | 0.074 | -0.029 | 97.715 | 0.977 |
| soc_sharing | fixed | NA | ed_score_sc | -0.032 | 0.075 | -0.428 | 98.069 | 0.670 |
| soc_sharing | fixed | NA | ER_soc_sharing_pc_sc | 0.037 | 0.008 | 4.481 | 63.301 | 0.000 |
| soc_sharing | fixed | NA | perc_pass_sc | -0.587 | 0.074 | -7.884 | 98.330 | 0.000 |
| soc_sharing | fixed | NA | neg_emo_comp_lag_pc_sc | 0.156 | 0.010 | 15.782 | 93.756 | 0.000 |
| soc_sharing | fixed | NA | ed_score_sc:ER_soc_sharing_pc_sc | -0.030 | 0.008 | -3.630 | 62.748 | 0.001 |
| soc_sharing | ran_pars | Participant | sd__(Intercept) | 0.747 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Participant | sd__ER_soc_sharing_pc_sc | 0.061 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Participant | sd__neg_emo_comp_lag_pc_sc | 0.072 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Participant | cor__(Intercept).ER_soc_sharing_pc_sc | -0.085 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Participant | cor__(Intercept).neg_emo_comp_lag_pc_sc | -0.208 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Participant | cor__ER_soc_sharing_pc_sc.neg_emo_comp_lag_pc_sc | -0.030 | NA | NA | NA | NA |
| soc_sharing | ran_pars | Residual | sd__Observation | 0.315 | NA | NA | NA | NA |
| distraction | fixed | NA | (Intercept) | -0.003 | 0.074 | -0.042 | 97.767 | 0.967 |
| distraction | fixed | NA | ed_score_sc | -0.035 | 0.075 | -0.470 | 98.100 | 0.639 |
| distraction | fixed | NA | ER_distraction_pc_sc | 0.025 | 0.010 | 2.449 | 53.897 | 0.018 |
| distraction | fixed | NA | perc_pass_sc | -0.575 | 0.075 | -7.698 | 98.323 | 0.000 |
| distraction | fixed | NA | neg_emo_comp_lag_pc_sc | 0.162 | 0.010 | 15.802 | 93.179 | 0.000 |
| distraction | fixed | NA | ed_score_sc:ER_distraction_pc_sc | -0.026 | 0.010 | -2.621 | 56.359 | 0.011 |
| distraction | ran_pars | Participant | sd__(Intercept) | 0.747 | NA | NA | NA | NA |
| distraction | ran_pars | Participant | sd__ER_distraction_pc_sc | 0.069 | NA | NA | NA | NA |
| distraction | ran_pars | Participant | sd__neg_emo_comp_lag_pc_sc | 0.077 | NA | NA | NA | NA |
| distraction | ran_pars | Participant | cor__(Intercept).ER_distraction_pc_sc | 0.039 | NA | NA | NA | NA |
| distraction | ran_pars | Participant | cor__(Intercept).neg_emo_comp_lag_pc_sc | -0.196 | NA | NA | NA | NA |
| distraction | ran_pars | Participant | cor__ER_distraction_pc_sc.neg_emo_comp_lag_pc_sc | 0.093 | NA | NA | NA | NA |
| distraction | ran_pars | Residual | sd__Observation | 0.316 | NA | NA | NA | NA |
Write results to disk
result_2_out <- result_2_df %>%
table_out
write_csv(result_2_out, here("writeup/model2_table.csv"))
Simple slopes were computed using the Case 3 calculator at this link: simple slopes calculator.
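The same simple slopes can also be computed directly from the fixed effects and the variance-covariance entries printed below, rather than via the online calculator. A minimal sketch (in Python, for a self-contained check, using the acceptance model's estimates and vcov entries shown in this section): the slope at ED = z SDs is b_strategy + z*b_interaction, with standard error sqrt(var_b + z^2*var_int + 2*z*cov).

```python
import math

def simple_slope(b_strat, b_inter, var_strat, var_inter, cov_si, z):
    """Simple slope of strategy use on emotion at ED = z SDs above the mean."""
    slope = b_strat + b_inter * z
    se = math.sqrt(var_strat + z**2 * var_inter + 2 * z * cov_si)
    return slope, se

# fixed effects and vcov entries from the acceptance model printed below
b, bi = -0.037240, 0.031372
vb, vi, cv = 1.239370e-04, 1.065094e-04, 9.669230e-06

for z, label in [(-1, "Low ED (-1 SD)"), (1, "High ED (+1 SD)")]:
    slope, se = simple_slope(b, bi, vb, vi, cv, z)
    print(f"{label}: slope = {slope:.4f}, t = {slope / se:.2f}")
```

With these numbers the slope is reliably negative at low ED and essentially flat at high ED; the same arithmetic applies to each of the other strategy models.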
Acceptance
vcov(acceptance_mod2)
## 6 x 6 Matrix of class "dpoMatrix"
## (Intercept) ed_score_sc
## (Intercept) 5.544966e-03 -7.508224e-06
## ed_score_sc -7.508224e-06 5.465706e-03
## ER_acceptance_pc_sc -5.558357e-05 1.736448e-06
## perc_pass_sc -8.815555e-06 -8.960147e-04
## neg_emo_comp_lag_pc_sc -1.540246e-04 2.917526e-05
## ed_score_sc:ER_acceptance_pc_sc -2.350423e-07 -2.801237e-06
## ER_acceptance_pc_sc perc_pass_sc
## (Intercept) -5.558357e-05 -8.815555e-06
## ed_score_sc 1.736448e-06 -8.960147e-04
## ER_acceptance_pc_sc 1.239370e-04 2.748666e-06
## perc_pass_sc 2.748666e-06 5.468348e-03
## neg_emo_comp_lag_pc_sc 3.487762e-05 3.435392e-05
## ed_score_sc:ER_acceptance_pc_sc 9.669230e-06 -2.427728e-06
## neg_emo_comp_lag_pc_sc
## (Intercept) -1.540246e-04
## ed_score_sc 2.917526e-05
## ER_acceptance_pc_sc 3.487762e-05
## perc_pass_sc 3.435392e-05
## neg_emo_comp_lag_pc_sc 1.020176e-04
## ed_score_sc:ER_acceptance_pc_sc -6.072100e-06
## ed_score_sc:ER_acceptance_pc_sc
## (Intercept) -2.350423e-07
## ed_score_sc -2.801237e-06
## ER_acceptance_pc_sc 9.669230e-06
## perc_pass_sc -2.427728e-06
## neg_emo_comp_lag_pc_sc -6.072100e-06
## ed_score_sc:ER_acceptance_pc_sc 1.065094e-04
coef(acceptance_mod2)$Participant$`(Intercept)` %>% mean
## [1] -0.004004893
coef(acceptance_mod2)$Participant$ER_acceptance_pc_sc %>% mean
## [1] -0.03723986
coef(acceptance_mod2)$Participant$ed_score_sc %>% mean
## [1] -0.04410046
coef(acceptance_mod2)$Participant$`ed_score_sc:ER_acceptance_pc_sc` %>% mean
## [1] 0.03137228
Rumination
vcov(rumination_mod2)
## 6 x 6 Matrix of class "dpoMatrix"
## (Intercept) ed_score_sc
## (Intercept) 5.528244e-03 -8.062004e-06
## ed_score_sc -8.062004e-06 5.630052e-03
## ER_rumination_pc_sc -4.892770e-05 -2.594010e-06
## perc_pass_sc -8.185932e-06 -9.341434e-04
## neg_emo_comp_lag_pc_sc -7.842158e-05 1.446588e-05
## ed_score_sc:ER_rumination_pc_sc 1.473173e-06 -6.561858e-05
## ER_rumination_pc_sc perc_pass_sc
## (Intercept) -4.892770e-05 -8.185932e-06
## ed_score_sc -2.594010e-06 -9.341434e-04
## ER_rumination_pc_sc 1.334803e-04 1.696096e-05
## perc_pass_sc 1.696096e-05 5.592143e-03
## neg_emo_comp_lag_pc_sc -2.258816e-05 1.898533e-05
## ed_score_sc:ER_rumination_pc_sc 1.539404e-05 6.282732e-06
## neg_emo_comp_lag_pc_sc
## (Intercept) -7.842158e-05
## ed_score_sc 1.446588e-05
## ER_rumination_pc_sc -2.258816e-05
## perc_pass_sc 1.898533e-05
## neg_emo_comp_lag_pc_sc 1.019321e-04
## ed_score_sc:ER_rumination_pc_sc 3.892720e-06
## ed_score_sc:ER_rumination_pc_sc
## (Intercept) 1.473173e-06
## ed_score_sc -6.561858e-05
## ER_rumination_pc_sc 1.539404e-05
## perc_pass_sc 6.282732e-06
## neg_emo_comp_lag_pc_sc 3.892720e-06
## ed_score_sc:ER_rumination_pc_sc 1.420868e-04
coef(rumination_mod2)$Participant$`(Intercept)` %>% mean
## [1] -3.988211e-05
coef(rumination_mod2)$Participant$ER_rumination_pc_sc %>% mean
## [1] 0.08276393
coef(rumination_mod2)$Participant$ed_score_sc %>% mean
## [1] -0.02150834
coef(rumination_mod2)$Participant$`ed_score_sc:ER_rumination_pc_sc` %>% mean
## [1] -0.02483308
Social sharing
vcov(soc_sharing_mod2)
## 6 x 6 Matrix of class "dpoMatrix"
## (Intercept) ed_score_sc
## (Intercept) 5.544068e-03 -7.712139e-06
## ed_score_sc -7.712139e-06 5.577862e-03
## ER_soc_sharing_pc_sc -3.680854e-05 8.888426e-07
## perc_pass_sc -9.014704e-06 -9.204938e-04
## neg_emo_comp_lag_pc_sc -1.107746e-04 2.221721e-05
## ed_score_sc:ER_soc_sharing_pc_sc -1.753794e-08 -4.318532e-05
## ER_soc_sharing_pc_sc perc_pass_sc
## (Intercept) -3.680854e-05 -9.014704e-06
## ed_score_sc 8.888426e-07 -9.204938e-04
## ER_soc_sharing_pc_sc 6.919272e-05 -4.707261e-07
## perc_pass_sc -4.707261e-07 5.548934e-03
## neg_emo_comp_lag_pc_sc -6.659705e-06 2.809399e-05
## ed_score_sc:ER_soc_sharing_pc_sc -2.209418e-06 -1.847674e-06
## neg_emo_comp_lag_pc_sc
## (Intercept) -1.107746e-04
## ed_score_sc 2.221721e-05
## ER_soc_sharing_pc_sc -6.659705e-06
## perc_pass_sc 2.809399e-05
## neg_emo_comp_lag_pc_sc 9.759626e-05
## ed_score_sc:ER_soc_sharing_pc_sc 2.362961e-06
## ed_score_sc:ER_soc_sharing_pc_sc
## (Intercept) -1.753794e-08
## ed_score_sc -4.318532e-05
## ER_soc_sharing_pc_sc -2.209418e-06
## perc_pass_sc -1.847674e-06
## neg_emo_comp_lag_pc_sc 2.362961e-06
## ed_score_sc:ER_soc_sharing_pc_sc 6.928746e-05
coef(soc_sharing_mod2)$Participant$`(Intercept)` %>% mean
## [1] -0.002191875
coef(soc_sharing_mod2)$Participant$ER_soc_sharing_pc_sc %>% mean
## [1] 0.03727333
coef(soc_sharing_mod2)$Participant$ed_score_sc %>% mean
## [1] -0.03194101
coef(soc_sharing_mod2)$Participant$`ed_score_sc:ER_soc_sharing_pc_sc` %>% mean
## [1] -0.03021199
Distraction
vcov(distraction_mod2)
## 6 x 6 Matrix of class "dpoMatrix"
## (Intercept) ed_score_sc
## (Intercept) 5.541540e-03 -7.785104e-06
## ed_score_sc -7.785104e-06 5.586502e-03
## ER_distraction_pc_sc 2.109222e-05 8.853737e-07
## perc_pass_sc -9.134247e-06 -9.254912e-04
## neg_emo_comp_lag_pc_sc -1.105841e-04 2.181706e-05
## ed_score_sc:ER_distraction_pc_sc -2.669217e-07 2.510198e-05
## ER_distraction_pc_sc perc_pass_sc
## (Intercept) 2.109222e-05 -9.134247e-06
## ed_score_sc 8.853737e-07 -9.254912e-04
## ER_distraction_pc_sc 1.029447e-04 -2.320608e-06
## perc_pass_sc -2.320608e-06 5.576569e-03
## neg_emo_comp_lag_pc_sc 3.221128e-06 2.592137e-05
## ed_score_sc:ER_distraction_pc_sc -1.377924e-06 -1.563836e-06
## neg_emo_comp_lag_pc_sc
## (Intercept) -1.105841e-04
## ed_score_sc 2.181706e-05
## ER_distraction_pc_sc 3.221128e-06
## perc_pass_sc 2.592137e-05
## neg_emo_comp_lag_pc_sc 1.050474e-04
## ed_score_sc:ER_distraction_pc_sc -7.925052e-09
## ed_score_sc:ER_distraction_pc_sc
## (Intercept) -2.669217e-07
## ed_score_sc 2.510198e-05
## ER_distraction_pc_sc -1.377924e-06
## perc_pass_sc -1.563836e-06
## neg_emo_comp_lag_pc_sc -7.925052e-09
## ed_score_sc:ER_distraction_pc_sc 9.533797e-05
coef(distraction_mod2)$Participant$`(Intercept)` %>% mean
## [1] -0.003134279
coef(distraction_mod2)$Participant$ER_distraction_pc_sc %>% mean
## [1] 0.02484966
coef(distraction_mod2)$Participant$ed_score_sc %>% mean
## [1] -0.03514497
coef(distraction_mod2)$Participant$`ed_score_sc:ER_distraction_pc_sc` %>% mean
## [1] -0.025588
ss_coefs_df <- read_csv(here("data/simple_slopes_coefficients.csv")) %>%
pivot_wider(names_from = "term", values_from = "coefficient")
## Parsed with column specification:
## cols(
## strategy = col_character(),
## term = col_character(),
## sd_group = col_character(),
## coefficient = col_double()
## )
strat_sds <- df %>%
select(which(names(df) %in% ss_coefs_df$strategy)) %>%
pivot_longer(cols = everything(), names_to = "strategy", values_to = "value") %>%
group_by(strategy) %>%
summarize(strat_sd = sd(value)) %>%
ungroup
neg_emo_sd <- sd(df$neg_emo_comp)
neg_emo_mean <- mean(df$neg_emo_comp)
# unstandardize the coefficients and create separate columns for different line segments
ss_coefs_df_unstd <- ss_coefs_df %>%
left_join(strat_sds) %>%
mutate(slope_unstd = slope * (neg_emo_sd/strat_sd),
intercept_unstd = neg_emo_mean + (intercept * neg_emo_sd)) %>%
mutate(strategy_type=recode(strategy,
"ER_rumination_pc" = "Rumination",
"ER_soc_sharing_pc" = "Social sharing",
"ER_acceptance_pc" = "Acceptance",
"ER_distraction_pc" = "Distraction")) %>%
mutate(strategy_type = fct_relevel(strategy_type,
"Rumination","Distraction","Acceptance","Social sharing"),
sd_group = dplyr::recode(sd_group,
"hi" = "High ED (+1 SD)",
"lo" = "Low ED (-1 SD)"),
sd_group = fct_relevel(sd_group, "Low ED (-1 SD)"))
## Joining, by = "strategy"
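The back-transformation in the mutate() above (slope_unstd = slope * sd_y/sd_x; intercept_unstd = mean_y + intercept * sd_y, which works because the person-centered predictors have mean 0) can be checked numerically. A self-contained Python sketch with made-up data (note the sd ratio cancels, so the ddof convention does not matter):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(3, 2, 500)
y = 1.5 * x + rng.normal(0, 1, 500)

# predictor is person-centered (mean 0), mirroring the *_pc columns
xc = x - x.mean()
b_raw, a_raw = np.polyfit(xc, y, 1)   # slope/intercept on raw scales

# slope/intercept on z-scored scales
zx = xc / xc.std()
zy = (y - y.mean()) / y.std()
b_std, a_std = np.polyfit(zx, zy, 1)

# back-transform, mirroring the mutate() above
b_back = b_std * (y.std() / xc.std())
a_back = y.mean() + a_std * y.std()   # a_std is ~0 after centering

assert np.allclose([b_back, a_back], [b_raw, a_raw], atol=1e-6)
```

This confirms the unstandardized line segments plotted below sit on the raw negative-emotion scale.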
plot_df <- df %>%
select(Participant,
ed_score,
ed_score_class,
matches("ER_.*_pc_sc$"),
-matches("_c$|_psc$|_lag|_pc$"),
neg_emo_comp) %>%
pivot_longer(cols = c(-Participant, -neg_emo_comp, -ed_score, -ed_score_class),
names_to = "strategy_type",
values_to = "strategy_rating") %>%
mutate(
ed_score = scale(ed_score, T, F),
strategy_type = str_replace_all(strategy_type, "ER_", ""),
strategy_type = reorder(strategy_type, strategy_rating, mean)) %>%
mutate(
strategy_type = dplyr::recode(
strategy_type,
"rumination_pc_sc" = "Rumination",
"reapp_pc_sc" = "Reappraisal",
"soc_sharing_pc_sc" = "Social sharing",
"acceptance_pc_sc" = "Acceptance",
"distraction_pc_sc" = "Distraction",
"supp_pc_sc" = "Suppression")
) %>%
mutate(strategy_type = fct_relevel(strategy_type,
"Rumination","Distraction","Acceptance","Social sharing")) %>%
filter(strategy_type %in% c("Rumination","Distraction","Acceptance","Social sharing"))
plot_df %>%
ggplot(aes(x = strategy_rating, y = neg_emo_comp)) +
# individual data points
geom_jitter(aes(color = ed_score), alpha = .5, size = .2) +
# line segments with 3 standard deviations on x axis
geom_segment(aes(
x = 0 - (strat_sd*3),
xend = 0 + (strat_sd*3),
y = intercept_unstd + (slope_unstd * (0 - (strat_sd * 3))),
yend = intercept_unstd + (slope_unstd * (0 + (strat_sd * 3))),
linetype = sd_group
), data = ss_coefs_df_unstd) +
# line segments between 3-6 standard devs on x axis
geom_segment(aes(
x = 0 - (strat_sd*6),
xend = 0 + (strat_sd*6),
y = intercept_unstd + (slope_unstd * (0 - (strat_sd * 6))),
yend = intercept_unstd + (slope_unstd * (0 + (strat_sd * 6)))
), linetype = "dotted", data = ss_coefs_df_unstd) +
# formatting
facet_wrap(~strategy_type) +
coord_cartesian(xlim = c(-5, 5)) +
scale_x_continuous(breaks = seq(-5,5,1)) +
scale_colour_gradient2(low = "red", mid = "lightcyan", high = "blue") +
labs(x = "Strategy usage",
y = "Negative emotion intensity",
color = "Emotion differentiation",
linetype = "") +
theme(panel.spacing = unit(.1, "lines"))
A side-by-side comparison with the original graph would be ideal here.
Follow-up analyses (desired but not required).
It is important to note here that the reappraisal question is retrospective (it asks about the period since the last ESM survey), while the emotion-intensity question refers to the current moment. Thus, within a single survey, the reappraisal rating effectively reflects time t-1 and the emotion rating time t. As a result, when examining the effect of reappraisal on subsequent emotion, no lagged measurement is needed; when examining the effect of emotion on subsequent reappraisal usage, a lagged emotion measurement is needed.
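The alignment above can be illustrated with a toy table (a Python/pandas sketch; the values are made up). Only the emotion-to-later-reappraisal direction needs an explicit within-participant lag:

```python
import pandas as pd

df = pd.DataFrame({
    "Participant": [1, 1, 1, 2, 2, 2],
    "t":           [1, 2, 3, 1, 2, 3],
    "neg_emo":     [80, 60, 50, 30, 40, 20],  # current intensity at t
    "reap":        [2, 5, 4, 1, 3, 2],        # retrospective: covers (t-1, t]
})

# emotion -> later reappraisal: lag emotion explicitly, within participant
df["neg_emo_lag"] = df.groupby("Participant")["neg_emo"].shift(1)

# reappraisal -> later emotion: no lag needed, because the same-row pairing
# already puts reap (period ending at t) next to neg_emo (measured at t)
print(df)
```

The shift() restarts within each participant, so the first survey of each person gets a missing lag rather than borrowing the previous participant's value.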
Here I make a simpler dataframe for modeling these temporal relationships, limited to measurements taken on the day the test result was received.
df_mod_long <- df %>%
mutate(t = exam_beepnum+1,
t_num = as.numeric(t)) %>%
filter(t <= 10) %>%
select(Participant, t, t_num, perc_pass,
ER_reapp_sc, ER_reapp_lag_sc,
neg_emo_comp_sc, neg_emo_comp_lag_sc) %>%
mutate(t = paste0("t", t),
# perc_pass = scale(perc_pass),
) %>%
rename("neg" = "neg_emo_comp_sc",
"neg_lag" = "neg_emo_comp_lag_sc",
"reap" = "ER_reapp_sc",
"reap_lag" = "ER_reapp_lag_sc")
df_mod <- df_mod_long %>%
pivot_wider(id_cols = c("Participant", "perc_pass"),
names_from = "t", values_from = c("neg", "neg_lag", "reap", "reap_lag"))
Looking at the relation between lagged negative emotion and ER (and vice versa), it runs in the opposite direction at t1 and t2 but stays in the expected direction thereafter. I will experiment with removing these time points from the modeling later on. Notably, the relationship between reappraisal and subsequent negative emotion appears to strengthen over time, which is puzzling: at first glance it is counterintuitive that there should be a positive relationship at all, since reappraisal should dampen negative emotion. It is hard to draw conclusions here, however, because prior negative emotion is not controlled for: someone with more intense emotion at time t-1 will likely use reappraisal more, but will also report higher emotion at time t.
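This confound is easy to demonstrate by simulation (a Python sketch with made-up parameters): even when reappraisal has no causal effect on later emotion, autocorrelated emotion that also drives reappraisal use produces a positive raw correlation between reappraisal and subsequent emotion.

```python
import numpy as np

rng = np.random.default_rng(42)
n, T = 200, 50
emo = np.zeros((n, T))
reap = np.zeros((n, T))
for t in range(1, T):
    # emotion is autocorrelated; reappraisal has NO causal effect on it
    emo[:, t] = 0.8 * emo[:, t - 1] + rng.normal(0, 1, n)
    # but stronger prior emotion prompts more reappraisal use
    reap[:, t] = 0.5 * emo[:, t - 1] + rng.normal(0, 1, n)

# positive despite the absence of any reappraisal -> emotion effect
r = np.corrcoef(reap[:, 1:].ravel(), emo[:, 1:].ravel())[0, 1]
print(f"corr(reap_t, emo_t) = {r:.2f}")
```

Conditioning on the lagged emotion term, as in the models below, removes this spurious association.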
df_mod_long %>%
ggplot(aes(y = reap, x = neg_lag)) +
geom_point(position = position_jitter(.2), alpha = .4) +
geom_smooth(aes(color = t), method="gam", se = F, size = .9)
## Warning: Removed 216 rows containing non-finite values (stat_smooth).
## Warning: Removed 216 rows containing missing values (geom_point).
df_mod_long %>%
ggplot(aes(x = reap, y = neg)) +
geom_point(position = position_jitter(.2), alpha = .4) +
geom_smooth(aes(color = t), method="gam", se = F, size = .9)
Here we explore the relationship between reappraisal and subsequent negative emotion. Controlling for prior negative emotion, there does not appear to be any relationship at all. This could indicate that in the first day after receiving their result, reappraisal was inaccessible to participants. I should check whether this result holds for distraction, which is thought to be more accessible during high-intensity negative emotion. Note also the very strong autocorrelation in negative emotion.
fit_neg <- lmer(neg ~ reap + neg_lag + (1|Participant), df_mod_long)
## boundary (singular) fit: see ?isSingular
coef_neg <- coef(fit_neg)
df_mod_long %>%
ggplot(aes(x = reap, y = neg)) +
geom_jitter(width = .1, alpha = .2) +
geom_abline(intercept = coef_neg$Participant$`(Intercept)`,
slope = coef_neg$Participant$reap, size = .2)
summary(fit_neg)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: neg ~ reap + neg_lag + (1 | Participant)
## Data: df_mod_long
##
## REML criterion at convergence: 951.8
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -4.0917 -0.4180 -0.0963 0.3688 5.0647
##
## Random effects:
## Groups Name Variance Std.Dev.
## Participant (Intercept) 0.0000 0.0000
## Residual 0.2174 0.4663
## Number of obs: 713, groups: Participant, 101
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) -0.006169 0.018132 710.000000 -0.340 0.734
## reap -0.009919 0.014466 710.000000 -0.686 0.493
## neg_lag 0.891359 0.015767 710.000000 56.534 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) reap
## reap -0.220
## neg_lag -0.114 -0.176
## convergence code: 0
## boundary (singular) fit: see ?isSingular
Here we see the expected positive relationship between negative emotion and subsequent reappraisal usage.
fit_er <- lmer(reap ~ neg_lag + reap_lag + (1|Participant), df_mod_long)
coef_er <- coef(fit_er)
df_mod_long %>%
ggplot(aes(y = reap, x = neg_lag)) +
geom_jitter(width = .1, alpha = .2) +
geom_abline(intercept = coef_er$Participant$`(Intercept)`,
slope = coef_er$Participant$`neg_lag`, size = .2) +
geom_abline(intercept = mean(coef_er$Participant$`(Intercept)`),
slope = mean(coef_er$Participant$`neg_lag`),
color = "red")
## Warning: Removed 216 rows containing missing values (geom_point).
summary(fit_er)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: reap ~ neg_lag + reap_lag + (1 | Participant)
## Data: df_mod_long
##
## REML criterion at convergence: 2012.4
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.9888 -0.4287 -0.1116 0.1147 4.8899
##
## Random effects:
## Groups Name Variance Std.Dev.
## Participant (Intercept) 0.2921 0.5405
## Residual 0.8138 0.9021
## Number of obs: 713, groups: Participant, 101
##
## Fixed effects:
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 0.14466 0.06583 76.77491 2.197 0.031007 *
## neg_lag 0.17562 0.04934 168.37671 3.559 0.000483 ***
## reap_lag 0.29542 0.03029 698.02297 9.755 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Correlation of Fixed Effects:
## (Intr) neg_lg
## neg_lag -0.113
## reap_lag -0.197 -0.086
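The RI-CLPM fit below separates stable between-person levels (neg_lt, reap_lt) from wave-specific within-person deviations. Writing $n_{i,t}$ and $r_{i,t}$ for the within-person (latent, centered) parts of negative emotion and reappraisal, the constrained structural part is:

$$
n_{i,t} = \alpha\, n_{i,t-1} + \beta\, r_{i,t} + u_{i,t},
\qquad
r_{i,t} = \delta\, r_{i,t-1} + \gamma\, n_{i,t-1} + v_{i,t}
$$

Here $\beta$ multiplies the same-wave $r_{i,t}$ because the reappraisal item is retrospective (it already refers to the interval ending at $t$), while $\gamma$ uses lagged emotion; $\alpha$, $\beta$, $\gamma$, and $\delta$ are constrained equal across waves $t = 4, \dots, 10$ via the shared lavaan labels.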
riclpmModel_not1t2 <-
'
neg_lt =~ 1*neg_t3 + 1*neg_t4 +
1*neg_t5 + 1*neg_t6 +
1*neg_t7 + 1*neg_t8 +
1*neg_t9 + 1*neg_t10
reap_lt =~ 1*reap_t3 + 1*reap_t4 +
1*reap_t5 + 1*reap_t6+
1*reap_t7 + 1*reap_t8 +
1*reap_t9 + 1*reap_t10
#intercepts
neg_t3 ~ mu3 *1
neg_t4 ~ mu4 *1
neg_t5 ~ mu5 *1
neg_t6 ~ mu6 *1
neg_t7 ~ mu7 *1
neg_t8 ~ mu8 *1
neg_t9 ~ mu9 *1
neg_t10 ~ mu10*1
reap_t3 ~ pi3 *1
reap_t4 ~ pi4 *1
reap_t5 ~ pi5 *1
reap_t6 ~ pi6 *1
reap_t7 ~ pi7 *1
reap_t8 ~ pi8 *1
reap_t9 ~ pi9 *1
reap_t10~ pi10*1
neg_lt ~~ neg_lt #variance
reap_lt ~~ reap_lt #variance
neg_lt ~~ reap_lt #covariance
# latent vars for AR and cross-lagged effects
# each factor loading set to 1
neg_lt3 =~ 1*neg_t3
neg_lt4 =~ 1*neg_t4
neg_lt5 =~ 1*neg_t5
neg_lt6 =~ 1*neg_t6
neg_lt7 =~ 1*neg_t7
neg_lt8 =~ 1*neg_t8
neg_lt9 =~ 1*neg_t9
neg_lt10=~ 1*neg_t10
reap_lt3 =~ 1*reap_t3
reap_lt4 =~ 1*reap_t4
reap_lt5 =~ 1*reap_t5
reap_lt6 =~ 1*reap_t6
reap_lt7 =~ 1*reap_t7
reap_lt8 =~ 1*reap_t8
reap_lt9 =~ 1*reap_t9
reap_lt10=~ 1*reap_t10
# regressions
neg_lt10 ~ alpha*neg_lt9 + beta*reap_lt10
neg_lt9 ~ alpha*neg_lt8 + beta*reap_lt9
neg_lt8 ~ alpha*neg_lt7 + beta*reap_lt8
neg_lt7 ~ alpha*neg_lt6 + beta*reap_lt7
neg_lt6 ~ alpha*neg_lt5 + beta*reap_lt6
neg_lt5 ~ alpha*neg_lt4 + beta*reap_lt5
neg_lt4 ~ alpha*neg_lt3 + beta*reap_lt4
reap_lt10 ~ delta*reap_lt9 + gamma*neg_lt9
reap_lt9 ~ delta*reap_lt8 + gamma*neg_lt8
reap_lt8 ~ delta*reap_lt7 + gamma*neg_lt7
reap_lt7 ~ delta*reap_lt6 + gamma*neg_lt6
reap_lt6 ~ delta*reap_lt5 + gamma*neg_lt5
reap_lt5 ~ delta*reap_lt4 + gamma*neg_lt4
reap_lt4 ~ delta*reap_lt3 + gamma*neg_lt3
neg_lt ~ perc_pass
# variance
neg_lt3 ~~ neg_lt3
neg_lt4 ~~ u4 *neg_lt4
neg_lt5 ~~ u5 *neg_lt5
neg_lt6 ~~ u6 *neg_lt6
neg_lt7 ~~ u7 *neg_lt7
neg_lt8 ~~ u8 *neg_lt8
neg_lt9 ~~ u9 *neg_lt9
neg_lt10~~ u10*neg_lt10
reap_lt3 ~~ reap_lt3
reap_lt4 ~~ v4 *reap_lt4
reap_lt5 ~~ v5 *reap_lt5
reap_lt6 ~~ v6 *reap_lt6
reap_lt7 ~~ v7 *reap_lt7
reap_lt8 ~~ v8 *reap_lt8
reap_lt9 ~~ v9 *reap_lt9
reap_lt10~~ v10*reap_lt10
# covariance
# neg_lt3 ~~ reap_lt3
# neg_lt4 ~~ reap_lt4
# neg_lt5 ~~ reap_lt5
# neg_lt6 ~~ reap_lt6
# neg_lt7 ~~ reap_lt7
# neg_lt8 ~~ reap_lt8
# neg_lt9 ~~ reap_lt9
# neg_lt10 ~~ reap_lt10
'
fit <- lavaan(riclpmModel_not1t2, data = df_mod,
missing = 'ML', #for the missing data
int.ov.free = F,
int.lv.free = F,
auto.fix.first = F,
auto.fix.single = F,
auto.cov.lv.x = F,
auto.cov.y = F,
auto.var = F)
summary(fit, standardized = T)
## lavaan 0.6-5 ended normally after 106 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of free parameters 64
## Number of equality constraints 24
## Row rank of the constraints matrix 24
##
## Number of observations 101
## Number of missing patterns 28
##
## Model Test User Model:
##
## Test statistic 188.390
## Degrees of freedom 128
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Information Observed
## Observed information based on Hessian
## Standard errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## neg_lt =~
## neg_t3 1.000 0.973 0.909
## neg_t4 1.000 0.973 0.885
## neg_t5 1.000 0.973 0.923
## neg_t6 1.000 0.973 0.907
## neg_t7 1.000 0.973 0.963
## neg_t8 1.000 0.973 0.914
## neg_t9 1.000 0.973 0.893
## neg_t10 1.000 0.973 0.937
## reap_lt =~
## reap_t3 1.000 0.791 0.590
## reap_t4 1.000 0.791 0.659
## reap_t5 1.000 0.791 0.654
## reap_t6 1.000 0.791 0.688
## reap_t7 1.000 0.791 0.743
## reap_t8 1.000 0.791 0.715
## reap_t9 1.000 0.791 0.694
## reap_t10 1.000 0.791 0.770
## neg_lt3 =~
## neg_t3 1.000 0.445 0.416
## neg_lt4 =~
## neg_t4 1.000 0.512 0.465
## neg_lt5 =~
## neg_t5 1.000 0.404 0.384
## neg_lt6 =~
## neg_t6 1.000 0.452 0.421
## neg_lt7 =~
## neg_t7 1.000 0.274 0.271
## neg_lt8 =~
## neg_t8 1.000 0.432 0.405
## neg_lt9 =~
## neg_t9 1.000 0.491 0.450
## neg_lt10 =~
## neg_t10 1.000 0.364 0.350
## reap_lt3 =~
## reap_t3 1.000 1.082 0.807
## reap_lt4 =~
## reap_t4 1.000 0.903 0.752
## reap_lt5 =~
## reap_t5 1.000 0.914 0.756
## reap_lt6 =~
## reap_t6 1.000 0.834 0.726
## reap_lt7 =~
## reap_t7 1.000 0.711 0.669
## reap_lt8 =~
## reap_t8 1.000 0.774 0.699
## reap_lt9 =~
## reap_t9 1.000 0.821 0.720
## reap_lt10 =~
## reap_t10 1.000 0.654 0.638
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## neg_lt10 ~
## neg_lt9 (alph) 0.406 0.049 8.211 0.000 0.549 0.549
## rp_lt10 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## neg_lt9 ~
## neg_lt8 (alph) 0.406 0.049 8.211 0.000 0.357 0.357
## rep_lt9 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## neg_lt8 ~
## neg_lt7 (alph) 0.406 0.049 8.211 0.000 0.258 0.258
## rep_lt8 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## neg_lt7 ~
## neg_lt6 (alph) 0.406 0.049 8.211 0.000 0.671 0.671
## rep_lt7 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## neg_lt6 ~
## neg_lt5 (alph) 0.406 0.049 8.211 0.000 0.364 0.364
## rep_lt6 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## neg_lt5 ~
## neg_lt4 (alph) 0.406 0.049 8.211 0.000 0.514 0.514
## rep_lt5 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## neg_lt4 ~
## neg_lt3 (alph) 0.406 0.049 8.211 0.000 0.354 0.354
## rep_lt4 (beta) 0.001 0.021 0.043 0.965 0.002 0.002
## reap_lt10 ~
## rep_lt9 (delt) 0.220 0.048 4.605 0.000 0.275 0.275
## neg_lt9 (gamm) 0.106 0.086 1.236 0.216 0.080 0.080
## reap_lt9 ~
## rep_lt8 (delt) 0.220 0.048 4.605 0.000 0.207 0.207
## neg_lt8 (gamm) 0.106 0.086 1.236 0.216 0.056 0.056
## reap_lt8 ~
## rep_lt7 (delt) 0.220 0.048 4.605 0.000 0.202 0.202
## neg_lt7 (gamm) 0.106 0.086 1.236 0.216 0.038 0.038
## reap_lt7 ~
## rep_lt6 (delt) 0.220 0.048 4.605 0.000 0.258 0.258
## neg_lt6 (gamm) 0.106 0.086 1.236 0.216 0.068 0.068
## reap_lt6 ~
## rep_lt5 (delt) 0.220 0.048 4.605 0.000 0.241 0.241
## neg_lt5 (gamm) 0.106 0.086 1.236 0.216 0.052 0.052
## reap_lt5 ~
## rep_lt4 (delt) 0.220 0.048 4.605 0.000 0.217 0.217
## neg_lt4 (gamm) 0.106 0.086 1.236 0.216 0.060 0.060
## reap_lt4 ~
## rep_lt3 (delt) 0.220 0.048 4.605 0.000 0.263 0.263
## neg_lt3 (gamm) 0.106 0.086 1.236 0.216 0.052 0.052
## neg_lt ~
## prc_pss -0.019 0.002 -8.703 0.000 -0.019 -0.662
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .neg_lt ~~
## reap_lt 0.147 0.068 2.155 0.031 0.256 0.256
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .neg_t3 (mu3) 1.225 0.147 8.321 0.000 1.225 1.144
## .neg_t4 (mu4) 1.259 0.150 8.419 0.000 1.259 1.145
## .neg_t5 (mu5) 1.112 0.146 7.621 0.000 1.112 1.055
## .neg_t6 (mu6) 1.150 0.148 7.791 0.000 1.150 1.071
## .neg_t7 (mu7) 1.146 0.143 8.020 0.000 1.146 1.134
## .neg_t8 (mu8) 1.126 0.147 7.660 0.000 1.126 1.057
## .neg_t9 (mu9) 1.105 0.149 7.428 0.000 1.105 1.014
## .neg_t10 (mu10) 1.086 0.145 7.485 0.000 1.086 1.045
## .reap_t3 (pi3) 0.508 0.137 3.721 0.000 0.508 0.379
## .reap_t4 (pi4) 0.456 0.123 3.703 0.000 0.456 0.380
## .reap_t5 (pi5) 0.286 0.123 2.330 0.020 0.286 0.237
## .reap_t6 (pi6) 0.369 0.118 3.125 0.002 0.369 0.321
## .reap_t7 (pi7) 0.171 0.108 1.579 0.114 0.171 0.161
## .reap_t8 (pi8) 0.130 0.113 1.152 0.249 0.130 0.117
## .reap_t9 (pi9) 0.064 0.116 0.547 0.584 0.064 0.056
## .rep_t10 (pi10) 0.107 0.106 1.009 0.313 0.107 0.104
## .neg_lt 0.000 0.000 0.000
## reap_lt 0.000 0.000 0.000
## neg_lt3 0.000 0.000 0.000
## .neg_lt4 0.000 0.000 0.000
## .neg_lt5 0.000 0.000 0.000
## .neg_lt6 0.000 0.000 0.000
## .neg_lt7 0.000 0.000 0.000
## .neg_lt8 0.000 0.000 0.000
## .neg_lt9 0.000 0.000 0.000
## .ng_lt10 0.000 0.000 0.000
## rep_lt3 0.000 0.000 0.000
## .rep_lt4 0.000 0.000 0.000
## .rep_lt5 0.000 0.000 0.000
## .rep_lt6 0.000 0.000 0.000
## .rep_lt7 0.000 0.000 0.000
## .rep_lt8 0.000 0.000 0.000
## .rep_lt9 0.000 0.000 0.000
## .rp_lt10 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .neg_lt 0.532 0.081 6.581 0.000 0.561 0.561
## reap_lt 0.625 0.110 5.709 0.000 1.000 1.000
## neg_lt3 0.198 0.036 5.479 0.000 1.000 1.000
## .neg_lt4 (u4) 0.229 0.037 6.161 0.000 0.875 0.875
## .neg_lt5 (u5) 0.120 0.021 5.603 0.000 0.736 0.736
## .neg_lt6 (u6) 0.177 0.031 5.701 0.000 0.868 0.868
## .neg_lt7 (u7) 0.041 0.009 4.510 0.000 0.550 0.550
## .neg_lt8 (u8) 0.174 0.029 6.081 0.000 0.934 0.934
## .neg_lt9 (u9) 0.210 0.035 6.082 0.000 0.872 0.872
## .neg_lt10 (u10) 0.092 0.017 5.533 0.000 0.699 0.699
## reap_lt3 1.172 0.189 6.199 0.000 1.000 1.000
## .reap_lt4 (v4) 0.757 0.123 6.137 0.000 0.928 0.928
## .reap_lt5 (v5) 0.793 0.128 6.201 0.000 0.949 0.949
## .reap_lt6 (v6) 0.653 0.112 5.842 0.000 0.939 0.939
## .reap_lt7 (v7) 0.470 0.084 5.599 0.000 0.928 0.928
## .reap_lt8 (v8) 0.573 0.099 5.791 0.000 0.957 0.957
## .reap_lt9 (v9) 0.642 0.109 5.912 0.000 0.954 0.954
## .rep_lt10 (v10) 0.392 0.072 5.426 0.000 0.917 0.917
## .neg_t3 0.000 0.000 0.000
## .neg_t4 0.000 0.000 0.000
## .neg_t5 0.000 0.000 0.000
## .neg_t6 0.000 0.000 0.000
## .neg_t7 0.000 0.000 0.000
## .neg_t8 0.000 0.000 0.000
## .neg_t9 0.000 0.000 0.000
## .neg_t10 0.000 0.000 0.000
## .reap_t3 0.000 0.000 0.000
## .reap_t4 0.000 0.000 0.000
## .reap_t5 0.000 0.000 0.000
## .reap_t6 0.000 0.000 0.000
## .reap_t7 0.000 0.000 0.000
## .reap_t8 0.000 0.000 0.000
## .reap_t9 0.000 0.000 0.000
## .reap_t10 0.000 0.000 0.000
fitMeasures(fit)
## npar fmin chisq
## 40.000 0.933 188.390
## df pvalue baseline.chisq
## 128.000 0.000 1787.528
## baseline.df baseline.pvalue cfi
## 136.000 0.000 0.963
## tli nnfi rfi
## 0.961 0.961 0.888
## nfi pnfi ifi
## 0.895 0.842 0.964
## rni logl unrestricted.logl
## 0.963 -1425.343 -1331.148
## aic bic ntotal
## 2930.687 3035.291 101.000
## bic2 rmsea rmsea.ci.lower
## 2908.954 0.068 0.046
## rmsea.ci.upper rmsea.pvalue rmr
## 0.088 0.081 1.030
## rmr_nomean srmr srmr_bentler
## 1.086 0.103 0.103
## srmr_bentler_nomean crmr crmr_nomean
## 0.108 0.082 0.087
## srmr_mplus srmr_mplus_nomean cn_05
## 0.092 0.096 84.316
## cn_01 gfi agfi
## 91.140 0.697 0.598
## pgfi mfi ecvi
## 0.525 0.742 2.657
riclpmModel <-
'
neg_lt =~ 1*neg_t1 + 1*neg_t2 +
1*neg_t3 + 1*neg_t4 +
1*neg_t5 + 1*neg_t6 +
1*neg_t7 + 1*neg_t8 +
1*neg_t9 + 1*neg_t10
reap_lt =~ 1*reap_t1 + 1*reap_t2 +
1*reap_t3 + 1*reap_t4 +
1*reap_t5 + 1*reap_t6+
1*reap_t7 + 1*reap_t8 +
1*reap_t9 + 1*reap_t10
#intercepts
neg_t1 ~ mu1 *1
neg_t2 ~ mu2 *1
neg_t3 ~ mu3 *1
neg_t4 ~ mu4 *1
neg_t5 ~ mu5 *1
neg_t6 ~ mu6 *1
neg_t7 ~ mu7 *1
neg_t8 ~ mu8 *1
neg_t9 ~ mu9 *1
neg_t10 ~ mu10*1
reap_t1 ~ pi1 *1
reap_t2 ~ pi2 *1
reap_t3 ~ pi3 *1
reap_t4 ~ pi4 *1
reap_t5 ~ pi5 *1
reap_t6 ~ pi6 *1
reap_t7 ~ pi7 *1
reap_t8 ~ pi8 *1
reap_t9 ~ pi9 *1
reap_t10~ pi10*1
neg_lt ~~ neg_lt #variance
reap_lt ~~ reap_lt #variance
neg_lt ~~ reap_lt #covariance
# latent vars for AR and cross-lagged effects
# each factor loading set to 1
neg_lt1 =~ 1*neg_t1
neg_lt2 =~ 1*neg_t2
neg_lt3 =~ 1*neg_t3
neg_lt4 =~ 1*neg_t4
neg_lt5 =~ 1*neg_t5
neg_lt6 =~ 1*neg_t6
neg_lt7 =~ 1*neg_t7
neg_lt8 =~ 1*neg_t8
neg_lt9 =~ 1*neg_t9
neg_lt10=~ 1*neg_t10
reap_lt1 =~ 1*reap_t1
reap_lt2 =~ 1*reap_t2
reap_lt3 =~ 1*reap_t3
reap_lt4 =~ 1*reap_t4
reap_lt5 =~ 1*reap_t5
reap_lt6 =~ 1*reap_t6
reap_lt7 =~ 1*reap_t7
reap_lt8 =~ 1*reap_t8
reap_lt9 =~ 1*reap_t9
reap_lt10=~ 1*reap_t10
# regressions
neg_lt10 ~ alpha*neg_lt9 + beta*reap_lt10
neg_lt9 ~ alpha*neg_lt8 + beta*reap_lt9
neg_lt8 ~ alpha*neg_lt7 + beta*reap_lt8
neg_lt7 ~ alpha*neg_lt6 + beta*reap_lt7
neg_lt6 ~ alpha*neg_lt5 + beta*reap_lt6
neg_lt5 ~ alpha*neg_lt4 + beta*reap_lt5
neg_lt4 ~ alpha*neg_lt3 + beta*reap_lt4
neg_lt3 ~ alpha*neg_lt2 + beta*reap_lt3
neg_lt2 ~ alpha*neg_lt1 + beta*reap_lt2
reap_lt10 ~ delta*reap_lt9 + gamma*neg_lt9
reap_lt9 ~ delta*reap_lt8 + gamma*neg_lt8
reap_lt8 ~ delta*reap_lt7 + gamma*neg_lt7
reap_lt7 ~ delta*reap_lt6 + gamma*neg_lt6
reap_lt6 ~ delta*reap_lt5 + gamma*neg_lt5
reap_lt5 ~ delta*reap_lt4 + gamma*neg_lt4
reap_lt4 ~ delta*reap_lt3 + gamma*neg_lt3
reap_lt3 ~ delta*reap_lt2 + gamma*neg_lt2
reap_lt2 ~ delta*reap_lt1 + gamma*neg_lt1
neg_lt ~ perc_pass
# variance
neg_lt1 ~~ neg_lt1
neg_lt2 ~~ u2*neg_lt2
neg_lt3 ~~ u3*neg_lt3
neg_lt4 ~~ u2*neg_lt4
neg_lt5 ~~ u3*neg_lt5
neg_lt6 ~~ u2*neg_lt6
neg_lt7 ~~ u3*neg_lt7
neg_lt8 ~~ u2*neg_lt8
neg_lt9 ~~ u3*neg_lt9
neg_lt10~~ u2*neg_lt10
reap_lt1 ~~ reap_lt1
reap_lt2 ~~ v2*reap_lt2
reap_lt3 ~~ v3*reap_lt3
reap_lt4 ~~ v2*reap_lt4
reap_lt5 ~~ v3*reap_lt5
reap_lt6 ~~ v2*reap_lt6
reap_lt7 ~~ v3*reap_lt7
reap_lt8 ~~ v2*reap_lt8
reap_lt9 ~~ v3*reap_lt9
reap_lt10~~ v2*reap_lt10
# covariance
# neg_lt1 ~~ reap_lt1
# neg_lt2 ~~ reap_lt2
# neg_lt3 ~~ reap_lt3
# neg_lt4 ~~ reap_lt4
# neg_lt5 ~~ reap_lt5
# neg_lt6 ~~ reap_lt6
# neg_lt7 ~~ reap_lt7
# neg_lt8 ~~ reap_lt8
# neg_lt9 ~~ reap_lt9
# neg_lt10~~ reap_lt10
'
fit <- lavaan(riclpmModel, data = df_mod,
              missing = 'ML',           # FIML for the missing data
              int.ov.free = FALSE,
              int.lv.free = FALSE,
              auto.fix.first = FALSE,
              auto.fix.single = FALSE,
              auto.cov.lv.x = FALSE,
              auto.cov.y = FALSE,
              auto.var = FALSE)
summary(fit, standardized = TRUE)
## lavaan 0.6-5 ended normally after 86 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of free parameters 80
## Number of equality constraints 47
## Row rank of the constraints matrix 47
##
## Number of observations 101
## Number of missing patterns 30
##
## Model Test User Model:
##
## Test statistic 430.711
## Degrees of freedom 217
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Information Observed
## Observed information based on Hessian
## Standard errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## neg_lt =~
## neg_t1 1.000 0.984 0.872
## neg_t2 1.000 0.984 0.897
## neg_t3 1.000 0.984 0.921
## neg_t4 1.000 0.984 0.906
## neg_t5 1.000 0.984 0.923
## neg_t6 1.000 0.984 0.906
## neg_t7 1.000 0.984 0.923
## neg_t8 1.000 0.984 0.906
## neg_t9 1.000 0.984 0.923
## neg_t10 1.000 0.984 0.906
## reap_lt =~
## reap_t1 1.000 0.786 0.405
## reap_t2 1.000 0.786 0.625
## reap_t3 1.000 0.786 0.666
## reap_t4 1.000 0.786 0.653
## reap_t5 1.000 0.786 0.668
## reap_t6 1.000 0.786 0.653
## reap_t7 1.000 0.786 0.668
## reap_t8 1.000 0.786 0.653
## reap_t9 1.000 0.786 0.668
## reap_t10 1.000 0.786 0.653
## neg_lt1 =~
## neg_t1 1.000 0.552 0.489
## neg_lt2 =~
## neg_t2 1.000 0.486 0.443
## neg_lt3 =~
## neg_t3 1.000 0.417 0.390
## neg_lt4 =~
## neg_t4 1.000 0.461 0.424
## neg_lt5 =~
## neg_t5 1.000 0.411 0.385
## neg_lt6 =~
## neg_t6 1.000 0.460 0.423
## neg_lt7 =~
## neg_t7 1.000 0.411 0.385
## neg_lt8 =~
## neg_t8 1.000 0.460 0.423
## neg_lt9 =~
## neg_t9 1.000 0.411 0.385
## neg_lt10 =~
## neg_t10 1.000 0.460 0.423
## reap_lt1 =~
## reap_t1 1.000 1.774 0.914
## reap_lt2 =~
## reap_t2 1.000 0.980 0.780
## reap_lt3 =~
## reap_t3 1.000 0.881 0.746
## reap_lt4 =~
## reap_t4 1.000 0.913 0.758
## reap_lt5 =~
## reap_t5 1.000 0.877 0.745
## reap_lt6 =~
## reap_t6 1.000 0.912 0.758
## reap_lt7 =~
## reap_t7 1.000 0.877 0.745
## reap_lt8 =~
## reap_t8 1.000 0.912 0.758
## reap_lt9 =~
## reap_t9 1.000 0.877 0.745
## reap_lt10 =~
## reap_t10 1.000 0.912 0.758
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## neg_lt10 ~
## neg_lt9 (alph) 0.431 0.040 10.644 0.000 0.385 0.385
## rp_lt10 (beta) -0.018 0.017 -1.048 0.295 -0.036 -0.036
## neg_lt9 ~
## neg_lt8 (alph) 0.431 0.040 10.644 0.000 0.483 0.483
## rep_lt9 (beta) -0.018 0.017 -1.048 0.295 -0.038 -0.038
## neg_lt8 ~
## neg_lt7 (alph) 0.431 0.040 10.644 0.000 0.385 0.385
## rep_lt8 (beta) -0.018 0.017 -1.048 0.295 -0.036 -0.036
## neg_lt7 ~
## neg_lt6 (alph) 0.431 0.040 10.644 0.000 0.483 0.483
## rep_lt7 (beta) -0.018 0.017 -1.048 0.295 -0.038 -0.038
## neg_lt6 ~
## neg_lt5 (alph) 0.431 0.040 10.644 0.000 0.385 0.385
## rep_lt6 (beta) -0.018 0.017 -1.048 0.295 -0.036 -0.036
## neg_lt5 ~
## neg_lt4 (alph) 0.431 0.040 10.644 0.000 0.483 0.483
## rep_lt5 (beta) -0.018 0.017 -1.048 0.295 -0.038 -0.038
## neg_lt4 ~
## neg_lt3 (alph) 0.431 0.040 10.644 0.000 0.390 0.390
## rep_lt4 (beta) -0.018 0.017 -1.048 0.295 -0.036 -0.036
## neg_lt3 ~
## neg_lt2 (alph) 0.431 0.040 10.644 0.000 0.503 0.503
## rep_lt3 (beta) -0.018 0.017 -1.048 0.295 -0.038 -0.038
## neg_lt2 ~
## neg_lt1 (alph) 0.431 0.040 10.644 0.000 0.489 0.489
## rep_lt2 (beta) -0.018 0.017 -1.048 0.295 -0.036 -0.036
## reap_lt10 ~
## rep_lt9 (delt) 0.230 0.034 6.697 0.000 0.221 0.221
## neg_lt9 (gamm) 0.134 0.077 1.732 0.083 0.060 0.060
## reap_lt9 ~
## rep_lt8 (delt) 0.230 0.034 6.697 0.000 0.240 0.240
## neg_lt8 (gamm) 0.134 0.077 1.732 0.083 0.070 0.070
## reap_lt8 ~
## rep_lt7 (delt) 0.230 0.034 6.697 0.000 0.221 0.221
## neg_lt7 (gamm) 0.134 0.077 1.732 0.083 0.060 0.060
## reap_lt7 ~
## rep_lt6 (delt) 0.230 0.034 6.697 0.000 0.240 0.240
## neg_lt6 (gamm) 0.134 0.077 1.732 0.083 0.070 0.070
## reap_lt6 ~
## rep_lt5 (delt) 0.230 0.034 6.697 0.000 0.221 0.221
## neg_lt5 (gamm) 0.134 0.077 1.732 0.083 0.060 0.060
## reap_lt5 ~
## rep_lt4 (delt) 0.230 0.034 6.697 0.000 0.240 0.240
## neg_lt4 (gamm) 0.134 0.077 1.732 0.083 0.070 0.070
## reap_lt4 ~
## rep_lt3 (delt) 0.230 0.034 6.697 0.000 0.222 0.222
## neg_lt3 (gamm) 0.134 0.077 1.732 0.083 0.061 0.061
## reap_lt3 ~
## rep_lt2 (delt) 0.230 0.034 6.697 0.000 0.256 0.256
## neg_lt2 (gamm) 0.134 0.077 1.732 0.083 0.074 0.074
## reap_lt2 ~
## rep_lt1 (delt) 0.230 0.034 6.697 0.000 0.417 0.417
## neg_lt1 (gamm) 0.134 0.077 1.732 0.083 0.075 0.075
## neg_lt ~
## prc_pss -0.019 0.002 -9.104 0.000 -0.020 -0.684
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .neg_lt ~~
## reap_lt 0.120 0.067 1.782 0.075 0.213 0.213
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .neg_t1 (mu1) 1.406 0.149 9.407 0.000 1.406 1.246
## .neg_t2 (mu2) 1.367 0.147 9.312 0.000 1.367 1.245
## .neg_t3 (mu3) 1.203 0.143 8.431 0.000 1.203 1.126
## .neg_t4 (mu4) 1.279 0.147 8.732 0.000 1.279 1.177
## .neg_t5 (mu5) 1.147 0.145 7.897 0.000 1.147 1.075
## .neg_t6 (mu6) 1.189 0.147 8.082 0.000 1.189 1.094
## .neg_t7 (mu7) 1.195 0.146 8.211 0.000 1.195 1.121
## .neg_t8 (mu8) 1.178 0.147 8.018 0.000 1.178 1.084
## .neg_t9 (mu9) 1.178 0.145 8.125 0.000 1.178 1.104
## .neg_t10 (mu3) 1.203 0.143 8.431 0.000 1.203 1.107
## .reap_t1 (pi1) 1.262 0.193 6.536 0.000 1.262 0.650
## .reap_t2 (pi2) 0.604 0.127 4.767 0.000 0.604 0.481
## .reap_t3 (pi3) 0.506 0.120 4.212 0.000 0.506 0.428
## .reap_t4 (pi4) 0.444 0.124 3.592 0.000 0.444 0.368
## .reap_t5 (pi5) 0.282 0.120 2.359 0.018 0.282 0.239
## .reap_t6 (pi6) 0.365 0.124 2.942 0.003 0.365 0.303
## .reap_t7 (pi7) 0.164 0.120 1.365 0.172 0.164 0.140
## .reap_t8 (pi8) 0.122 0.123 0.993 0.321 0.122 0.102
## .reap_t9 (pi9) 0.055 0.120 0.459 0.647 0.055 0.047
## .rep_t10 (pi10) 0.106 0.125 0.849 0.396 0.106 0.088
## .neg_lt 0.000 0.000 0.000
## reap_lt 0.000 0.000 0.000
## neg_lt1 0.000 0.000 0.000
## .neg_lt2 0.000 0.000 0.000
## .neg_lt3 0.000 0.000 0.000
## .neg_lt4 0.000 0.000 0.000
## .neg_lt5 0.000 0.000 0.000
## .neg_lt6 0.000 0.000 0.000
## .neg_lt7 0.000 0.000 0.000
## .neg_lt8 0.000 0.000 0.000
## .neg_lt9 0.000 0.000 0.000
## .ng_lt10 0.000 0.000 0.000
## rep_lt1 0.000 0.000 0.000
## .rep_lt2 0.000 0.000 0.000
## .rep_lt3 0.000 0.000 0.000
## .rep_lt4 0.000 0.000 0.000
## .rep_lt5 0.000 0.000 0.000
## .rep_lt6 0.000 0.000 0.000
## .rep_lt7 0.000 0.000 0.000
## .rep_lt8 0.000 0.000 0.000
## .rep_lt9 0.000 0.000 0.000
## .rp_lt10 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .neg_lt 0.516 0.080 6.474 0.000 0.533 0.533
## reap_lt 0.618 0.109 5.653 0.000 1.000 1.000
## neg_lt1 0.304 0.050 6.070 0.000 1.000 1.000
## .neg_lt2 (u2) 0.180 0.014 13.150 0.000 0.762 0.762
## .neg_lt3 (u3) 0.130 0.011 11.619 0.000 0.748 0.748
## .neg_lt4 (u2) 0.180 0.014 13.150 0.000 0.849 0.849
## .neg_lt5 (u3) 0.130 0.011 11.619 0.000 0.768 0.768
## .neg_lt6 (u2) 0.180 0.014 13.150 0.000 0.852 0.852
## .neg_lt7 (u3) 0.130 0.011 11.619 0.000 0.768 0.768
## .neg_lt8 (u2) 0.180 0.014 13.150 0.000 0.852 0.852
## .neg_lt9 (u3) 0.130 0.011 11.619 0.000 0.768 0.768
## .neg_lt10 (u2) 0.180 0.014 13.150 0.000 0.852 0.852
## reap_lt1 3.149 0.462 6.821 0.000 1.000 1.000
## .reap_lt2 (v2) 0.789 0.060 13.204 0.000 0.820 0.820
## .reap_lt3 (v3) 0.721 0.059 12.256 0.000 0.929 0.929
## .reap_lt4 (v2) 0.789 0.060 13.204 0.000 0.947 0.947
## .reap_lt5 (v3) 0.721 0.059 12.256 0.000 0.938 0.938
## .reap_lt6 (v2) 0.789 0.060 13.204 0.000 0.947 0.947
## .reap_lt7 (v3) 0.721 0.059 12.256 0.000 0.938 0.938
## .reap_lt8 (v2) 0.789 0.060 13.204 0.000 0.948 0.948
## .reap_lt9 (v3) 0.721 0.059 12.256 0.000 0.938 0.938
## .reap_lt10 (v2) 0.789 0.060 13.204 0.000 0.948 0.948
## .neg_t1 0.000 0.000 0.000
## .neg_t2 0.000 0.000 0.000
## .neg_t3 0.000 0.000 0.000
## .neg_t4 0.000 0.000 0.000
## .neg_t5 0.000 0.000 0.000
## .neg_t6 0.000 0.000 0.000
## .neg_t7 0.000 0.000 0.000
## .neg_t8 0.000 0.000 0.000
## .neg_t9 0.000 0.000 0.000
## .neg_t10 0.000 0.000 0.000
## .reap_t1 0.000 0.000 0.000
## .reap_t2 0.000 0.000 0.000
## .reap_t3 0.000 0.000 0.000
## .reap_t4 0.000 0.000 0.000
## .reap_t5 0.000 0.000 0.000
## .reap_t6 0.000 0.000 0.000
## .reap_t7 0.000 0.000 0.000
## .reap_t8 0.000 0.000 0.000
## .reap_t9 0.000 0.000 0.000
## .reap_t10 0.000 0.000 0.000
fitMeasures(fit)
## npar fmin chisq
## 33.000 2.132 430.711
## df pvalue baseline.chisq
## 217.000 0.000 2340.293
## baseline.df baseline.pvalue cfi
## 210.000 0.000 0.900
## tli nnfi rfi
## 0.903 0.903 NA
## nfi pnfi ifi
## NA 0.843 0.899
## rni logl unrestricted.logl
## 0.900 -1962.976 -1747.621
## aic bic ntotal
## 3991.953 4078.251 101.000
## bic2 rmsea rmsea.ci.lower
## 3974.023 0.099 0.085
## rmsea.ci.upper rmsea.pvalue rmr
## 0.112 0.000 1.430
## rmr_nomean srmr srmr_bentler
## 1.494 0.147 0.147
## srmr_bentler_nomean crmr crmr_nomean
## 0.153 0.126 0.132
## srmr_mplus srmr_mplus_nomean cn_05
## 0.138 0.144 60.179
## cn_01 gfi agfi
## 63.935 0.679 0.627
## pgfi mfi ecvi
## 0.585 0.347 4.918
Open the discussion section with a paragraph summarizing the primary result from the confirmatory analysis and the assessment of whether it replicated, partially replicated, or failed to replicate the original result.
Overall, this reproduction attempt was successful. I was able to reproduce all of the model coefficients reported in Study 2 of the paper. However, I would not have been able to do so without consulting the analysis script the authors posted online. The manuscript states:

> In Model 2, we used differentiation, regulation, their cross-level interaction, and percentage passed to predict negative emotion (separately for each strategy; six models). We included lagged negative emotion (at the previous time point) to model emotional change, again excluding overnight lags. We person-mean-centered regulation and lagged emotion and grand-mean-centered differentiation and percentage passed.
By examining the authors’ open analysis script, I discovered that regulation and lagged emotion were not only person-mean-centered as reported, but subsequently scaled by the grand standard deviation as well. In addition, differentiation was not only grand-mean-centered, but also scaled by its standard deviation. The same minor deviation (grand-mean centering vs. grand-mean scaling) also occurred in the description of Model 1. That said, these details do not substantively change or diminish the findings presented in the paper.
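To make the distinction concrete, here is a minimal base-R sketch of the two transformations, using a toy data frame and hypothetical column names (not the authors’ actual variables or code):

```r
# Toy ESM-style data: one emotion rating per participant per survey
df <- data.frame(
  id  = rep(1:3, each = 4),
  neg = c(10, 20, 30, 40, 55, 60, 65, 70, 5, 10, 15, 20)
)

# Person-mean centering (what the manuscript describes):
# subtract each participant's own mean
df$neg_pmc <- df$neg - ave(df$neg, df$id)

# Person-mean centering followed by grand-SD scaling
# (what, on my reading, the script actually did)
df$neg_pmc_scaled <- df$neg_pmc / sd(df$neg_pmc)
```

The two versions differ only by a constant multiplicative factor, which is why the discrepancy rescales coefficients without changing the substantive results.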
Add open-ended commentary (if any) reflecting (a) insights from follow-up exploratory analysis, (b) assessment of the meaning of the replication (or not) - e.g., for a failure to replicate, are the differences between original and present study ones that definitely, plausibly, or are unlikely to have been moderators of the result, and (c) discussion of any objections or challenges raised by the current and original authors about the replication attempt. None of these need to be long.
The reproduction attempt highlighted a few lessons for me:
This activity really illustrated the importance of open materials. I don’t think I could have reproduced the estimates in the paper without consulting the authors’ original code. Minor inconsistencies between a manuscript and its accompanying analysis pipeline are surely inevitable, whether from human error or from miscommunication between author and reader. When the analysis pipeline is posted online, such inconsistencies can be resolved.
In addition, any scientist is well acquainted with the feeling of finding something strange in a research report and wishing to investigate further. When materials are public, a reader can pursue these oddities without the potentially prohibitive overhead of running a whole new study. In this project, the open data let me dig further into the strange findings regarding the acceptance strategy and arrive at a different interpretation than I otherwise could have. More generally, this type of exploratory analysis could open up whole new research programs that would have gone undiscovered had the authors not made the opportunity available. In this way, open data has the potential to push science forward.
Scaling is a powerful tool, but it opens up many researcher degrees of freedom that could inflate Type I error. This may be a function of my own ignorance, but it seems genuinely ambiguous when and how one should center or scale values. There are situations in which scaling is certainly not necessary, but often one could justify going either direction. A researcher can apply any number or combination of scaling procedures (e.g., grand-mean scaling, grand-mean centering, person-mean scaling, person-mean centering, normalization to 0–1), and these could be applied to any permutation of the variables being modeled. I found it odd that the authors person-mean-centered the lagged emotion predictor but not the current emotion response variable. I would like to see a stronger, statistically grounded set of guidelines for exactly when and how scaling should be conducted.
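As an illustration of how many defensible options exist, each procedure named above can be written in a line or two of base R (toy data, hypothetical names):

```r
y  <- c(2, 4, 4, 4, 5, 5, 7, 9)   # toy repeated-measures outcome
id <- rep(1:2, each = 4)          # participant ids

gm_centered <- y - mean(y)                       # grand-mean centering
gm_scaled   <- (y - mean(y)) / sd(y)             # grand-mean scaling (z-score)
pm_centered <- y - ave(y, id)                    # person-mean centering
pm_scaled   <- pm_centered / sd(pm_centered)     # person-mean center, then scale
norm01      <- (y - min(y)) / (max(y) - min(y))  # normalization to 0-1
```

Each of these is reasonable in some context, which is exactly the problem: the choice is rarely forced by the data.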
This dataset provided a fun opportunity to practice the random-intercept cross-lagged panel model (RI-CLPM). The RI-CLPM tests bidirectional temporal relationships between variables while controlling for autoregressive effects, and the participant-level random intercept accounts for stable individual differences that can otherwise inflate the estimates of interest. Prior research has shown that reappraisal is effective for attenuating negative emotion in a lasting way, and other research has shown that people prefer reappraisal for low-intensity negative emotion. The RI-CLPMs fit to these bidirectional relationships fit only marginally well, with the CFI, RMSEA, and SRMR all teetering just past the traditional cutoffs for SEM models. One possible issue is that many participants in these data had little negative emotion to regulate; theoretically, we might then expect a curvilinear relationship in which reappraisal use is low at extreme values of negative emotion and high at midrange values. In any event, the exercise taught me how to fit a model of this sort, which will be useful for my future research.
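As a sketch of how that curvilinear hypothesis could be checked, one could add a quadratic term for negative emotion when predicting reappraisal use. The example below runs on simulated data with hypothetical variable names, not the study data:

```r
set.seed(1)
# Simulate an inverted-U: reappraisal use peaks at midrange negative emotion
neg  <- runif(200, 0, 100)
reap <- 4 - ((neg - 50) / 25)^2 + rnorm(200, sd = 0.5)

fit_lin  <- lm(reap ~ neg)            # linear model
fit_quad <- lm(reap ~ neg + I(neg^2)) # adds the quadratic term

# A significantly better-fitting quadratic model, with a negative
# coefficient on I(neg^2), would be consistent with low reappraisal
# at the extremes and high use in the midrange
anova(fit_lin, fit_quad)
```

In the actual data this comparison would ideally be done within the multilevel framework, but the logic of the quadratic term is the same.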
sessionInfo()
## R version 3.5.3 (2019-03-11)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS Mojave 10.14.6
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] lavaan_0.6-5 riclpmr_0.1.0.9000 lmerTest_3.1-0
## [4] lme4_1.1-21 Matrix_1.2-15 glue_1.3.1
## [7] broom.mixed_0.2.4 here_0.1 psych_1.8.12
## [10] haven_2.1.1 forcats_0.4.0 stringr_1.4.0
## [13] dplyr_0.8.3 purrr_0.3.3 readr_1.3.1
## [16] tidyr_1.0.0 tibble_2.1.3 ggplot2_3.2.1
## [19] tidyverse_1.2.1
##
## loaded via a namespace (and not attached):
## [1] minqa_1.2.4 colorspace_1.4-1 sjlabelled_1.1.1
## [4] rprojroot_1.3-2 estimability_1.3 htmlTable_1.13.1
## [7] parameters_0.2.0 base64enc_0.1-3 rstudioapi_0.10
## [10] ggrepel_0.8.1 mvtnorm_1.0-10 lubridate_1.7.4
## [13] xml2_1.2.2 splines_3.5.3 mnormt_1.5-5
## [16] knitr_1.25 sjmisc_2.8.2 zeallot_0.1.0
## [19] Formula_1.2-3 jsonlite_1.6 nloptr_1.2.1
## [22] ggeffects_0.13.0 broom_0.5.2 cluster_2.0.7-1
## [25] compiler_3.5.3 httr_1.4.1 emmeans_1.4.2
## [28] sjstats_0.17.7 backports_1.1.5 assertthat_0.2.1
## [31] lazyeval_0.2.2 cli_1.1.0 acepack_1.4.1
## [34] htmltools_0.4.0 tools_3.5.3 coda_0.19-2
## [37] gtable_0.3.0 reshape2_1.4.3 Rcpp_1.0.3
## [40] cellranger_1.1.0 vctrs_0.2.0 sjPlot_2.7.2
## [43] nlme_3.1-137 insight_0.7.0 xfun_0.10
## [46] irr_0.84.1 rvest_0.3.4 lpSolve_5.6.13.3
## [49] lifecycle_0.1.0 MASS_7.3-51.1 scales_1.0.0
## [52] hms_0.5.2 parallel_3.5.3 TMB_1.7.15
## [55] RColorBrewer_1.1-2 yaml_2.2.0 gridExtra_2.3
## [58] rpart_4.1-13 latticeExtra_0.6-28 stringi_1.4.3
## [61] bayestestR_0.4.0 checkmate_1.9.1 boot_1.3-20
## [64] rlang_0.4.1 pkgconfig_2.0.3 evaluate_0.14
## [67] lattice_0.20-38 htmlwidgets_1.5.1 labeling_0.3
## [70] tidyselect_0.2.5 plyr_1.8.4 magrittr_1.5
## [73] R6_2.4.0 generics_0.0.2 Hmisc_4.2-0
## [76] pillar_1.4.2 foreign_0.8-71 withr_2.1.2
## [79] mgcv_1.8-27 survival_2.43-3 nnet_7.3-12
## [82] performance_0.4.0 modelr_0.1.5 crayon_1.3.4
## [85] rmarkdown_1.16 grid_3.5.3 readxl_1.3.1
## [88] data.table_1.12.6 pbivnorm_0.6.0 digest_0.6.22
## [91] xtable_1.8-4 numDeriv_2016.8-1.1 stats4_3.5.3
## [94] munsell_0.5.0